
Table of Contents

Overview
Get started
Certifications
SAP HANA on Azure (Large Instances)
Overview and architecture
Infrastructure and connectivity
Install SAP HANA
High availability and disaster recovery
Troubleshoot and monitor
How to
HA Setup with STONITH
SAP HANA on Azure Virtual Machines
Single instance SAP HANA
S/4 HANA or BW/4 HANA SAP CAL deployment guide
SAP HANA High Availability in Azure VMs
SAP HANA backup overview
SAP HANA file level backup
SAP HANA storage snapshot backups
SAP NetWeaver on Azure Virtual Machines
SAP IDES on Windows/SQL Server SAP CAL deployment guide
SAP NetWeaver on Azure Linux VMs
Plan and implement SAP NetWeaver on Azure
High availability on Windows
High availability on SUSE Linux
Multi-SID configurations
Deployment guide
DBMS deployment guide
Azure Site Recovery for SAP Disaster Recovery
AAD SAP Identity Integration and Single-Sign-On
Integration with SAP Cloud
AAD Integration with SAP Cloud Platform Identity Authentication
Set up Single-Sign-On with SAP Cloud Platform
AAD Integration with SAP NetWeaver
AAD Integration with SAP Business ByDesign
AAD Integration with SAP HANA DBMS
SAP Fiori Launchpad SAML Single Sign-On with Azure AD
Resources
Azure Roadmap
Using Azure for hosting and running SAP workload scenarios
10/3/2017 · 10 min to read

By choosing Microsoft Azure as your SAP ready cloud partner, you are able to reliably run your mission critical SAP
workloads and scenarios on a scalable, compliant, and enterprise-proven platform. Get the scalability, flexibility,
and cost savings of Azure. With the expanded partnership between Microsoft and SAP, you can run SAP
applications across dev/test and production scenarios in Azure - and be fully supported. From SAP NetWeaver to
SAP S4/HANA, SAP BI, Linux to Windows, SAP HANA to SQL, we have you covered.
Besides hosting SAP NetWeaver scenarios with different DBMS on Azure, you can host other SAP
workload scenarios, such as SAP BI on Azure. Documentation for SAP NetWeaver deployments on native
Azure Virtual Machines can be found in the section "SAP NetWeaver on Azure Virtual Machines."
Azure also offers native Virtual Machine types with ever-growing CPU and memory resources to cover
SAP workloads that leverage SAP HANA. For more information on this topic, see the documents under the
section "SAP HANA on Azure Virtual Machines."
For SAP HANA, Azure provides a unique offer that sets it apart from the competition. To enable
hosting the most memory- and CPU-demanding SAP scenarios involving SAP HANA, Azure offers
customer-dedicated bare-metal hardware for running SAP HANA deployments that require up to
20 TB (60 TB scale-out) of memory for S/4HANA or other SAP HANA workloads. This unique Azure solution, SAP
HANA on Azure (Large Instances), allows you to run SAP HANA on dedicated bare-metal hardware with the SAP
application layer or workload middleware layer hosted in native Azure Virtual Machines. This solution is
documented in several documents in the section "SAP HANA on Azure (Large Instances)."
Hosting SAP workload scenarios in Azure can also create requirements for identity integration and Single Sign-On
using Azure Active Directory with different SAP components and SAP SaaS or PaaS offers. A list of such integration
and Single Sign-On scenarios with Azure Active Directory (AAD) and SAP entities is described and documented in
the section "AAD SAP Identity Integration and Single-Sign-On."

SAP HANA on Azure (Large Instances)


Overview and architecture of SAP HANA on Azure (Large Instances)
Title: Overview and Architecture of SAP HANA on Azure (Large Instances)
Summary: This Architecture and Technical Deployment Guide provides information to help you deploy SAP on the
new SAP HANA on Azure (Large Instances) offering. It is not intended to be a comprehensive guide covering
specific setup of SAP solutions, but rather provides useful information for your initial deployment and ongoing
operations. It should not replace SAP documentation related to the installation of SAP HANA (or the many SAP
Support Notes that cover the topic). It gives you an overview and provides additional detail on installing SAP HANA
on Azure (Large Instances).
Updated: October 2017
This guide can be found here
Infrastructure and connectivity to SAP HANA on Azure (Large Instances)
Title: Infrastructure and Connectivity to SAP HANA on Azure (Large Instances)
Summary: After the purchase of SAP HANA on Azure (Large Instances) is finalized between you and the Microsoft
enterprise account team, various network configurations are required in order to ensure proper connectivity. This
document outlines what information has to be collected and shared with Microsoft, and what configuration scripts
have to be run.
Updated: July 2017
This guide can be found here
Install SAP HANA in SAP HANA on Azure (Large Instances)
Title: Install SAP HANA on SAP HANA on Azure (Large Instances)
Summary: This document outlines the setup procedures for installing SAP HANA on your Azure Large Instance.
Updated: July 2017
This guide can be found here
High availability and disaster recovery of SAP HANA on Azure (Large Instances)
Title: High Availability and Disaster Recovery of SAP HANA on Azure (Large Instances)
Summary: High Availability (HA) and Disaster Recovery (DR) are very important aspects of running your mission-critical
SAP HANA on Azure (Large Instances) server(s). It's important to work with SAP, your system integrator,
and/or Microsoft to properly architect and implement the right HA/DR strategy for you. Important considerations
like Recovery Point Objective (RPO) and Recovery Time Objective (RTO), specific to your environment, must be
taken into account. This document explains your options for enabling your preferred level of HA and DR.
Updated: October 2017
This document can be found here
Troubleshooting and monitoring of SAP HANA on Azure (Large Instances)
Title: Troubleshooting and Monitoring of SAP HANA on Azure (Large Instances)
Summary: This guide covers information that is useful in establishing monitoring of your SAP HANA on Azure
environment as well as additional troubleshooting information.
Updated: August 2017
This document can be found here

SAP HANA on Azure Virtual Machines


Getting started with SAP HANA on Azure
Title: Quickstart guide for manual installation of SAP HANA on Azure VMs
Summary: This quickstart guide helps you set up a single-instance SAP HANA system on Azure VMs through a manual
installation of SAP NetWeaver 7.5 and SAP HANA SP12. The guide presumes that the reader is familiar with Azure
IaaS basics, such as deploying virtual machines or virtual networks via the Azure portal or PowerShell/CLI,
including the option to use JSON templates. Furthermore, the reader is expected to be familiar with SAP HANA and
SAP NetWeaver, and with how to install them on-premises.
Updated: June 2017
This guide can be found here
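As a hedged illustration of the PowerShell/CLI path the quickstart assumes, a single SUSE VM of a size used for single-node SAP HANA could be created with the Azure CLI roughly as follows. The resource group and VM names are placeholders invented here; verify current image aliases, VM sizes, and your subscription quota before running anything:

```shell
# Sketch only: requires an authenticated Azure CLI session (az login)
# and quota for the chosen VM size. Names below are placeholders.
az group create --name hana-quickstart-rg --location westus

# SLES is the Azure CLI image alias for SUSE Linux Enterprise Server;
# Standard_GS5 is one VM size that has been used for single-node SAP HANA.
az vm create \
  --resource-group hana-quickstart-rg \
  --name hana-vm1 \
  --image SLES \
  --size Standard_GS5 \
  --admin-username hanaadmin \
  --generate-ssh-keys
```

The same deployment can equally be done through the Azure portal or with a JSON (ARM) template, as the guide notes.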
S/4HANA SAP CAL deployment on Azure
Title: Deploy SAP S/4HANA or BW/4HANA on Azure
Summary: This guide demonstrates the deployment of SAP S/4HANA on Azure using SAP Cloud Appliance
Library, a service by SAP that lets you deploy SAP applications on Azure. The guide
describes the deployment step by step.
Updated: June 2017
This guide can be found here
High Availability of SAP HANA in Azure Virtual Machines
Title: High Availability of SAP HANA on Azure Virtual Machines
Summary: This guide leads you through the high availability configuration of the SUSE 12 OS and SAP HANA to
accommodate HANA System Replication with automatic failover. The guide is specific to SUSE and Azure Virtual
Machines. It does not yet apply to Red Hat, bare-metal, private cloud, or other non-Azure public cloud
deployments.
Updated: June 2017
This guide can be found here
SAP HANA backup overview on Azure VMs
Title: Backup guide for SAP HANA on Azure Virtual Machines
Summary: This guide provides basic information about backup possibilities when running SAP HANA on Azure Virtual
Machines.
Updated: March 2017
This guide can be found here
SAP HANA storage snapshot backups on Azure VMs
Title: SAP HANA backup based on storage snapshots
Summary: This guide provides information about using storage snapshot-based backups when running SAP
HANA on Azure Virtual Machines.
Updated: March 2017
This guide can be found here
SAP HANA file level backup on Azure VMs
Title: SAP HANA Azure Backup on file level
Summary: This guide provides information about using SAP HANA file level backup when running SAP HANA on Azure
Virtual Machines.
Updated: March 2017
This guide can be found here

SAP NetWeaver deployed on Azure Virtual Machines


Deploy SAP IDES system on Windows and SQL Server through SAP CAL on Azure
Title: Deploy SAP IDES system on Windows and SQL Server through SAP CAL on Azure
Summary: This document describes the deployment of an SAP IDES system based on Windows and SQL Server on
Azure using SAP Cloud Appliance Library, an SAP service that allows the
deployment of SAP products on Azure. This document goes step by step through the deployment of an SAP IDES
system. The IDES system is just one example of the several dozen applications that can be deployed through SAP
Cloud Appliance Library on Microsoft Azure.
Updated: June 2017
This guide can be found here
Quickstart guide for NetWeaver on SUSE Linux on Azure
Title: Testing SAP NetWeaver on Microsoft Azure SUSE Linux VMs
Summary: This article describes various things to consider when you're running SAP NetWeaver on Microsoft
Azure SUSE Linux virtual machines (VMs). SAP NetWeaver is officially supported on SUSE Linux VMs on Azure. All
details regarding Linux versions, SAP kernel versions, and other details can be found in SAP Note 1928533 "SAP
Applications on Azure: Supported Products and Azure VM types".
Updated: September 2016
This guide can be found here
Planning and implementation
Title: Azure Virtual Machines planning and implementation for SAP NetWeaver
Summary: This document is the guide to start with if you are thinking about running SAP NetWeaver in Azure
Virtual Machines. This planning and implementation guide helps you evaluate whether an existing or planned SAP
NetWeaver-based system can be deployed to an Azure Virtual Machines environment. It covers multiple SAP
NetWeaver deployment scenarios, and includes SAP configurations that are specific to Azure. The paper lists and
describes all the necessary configuration information you'll need on the SAP/Azure side to run a hybrid SAP
landscape. Measures you can take to ensure high availability of SAP NetWeaver-based systems on IaaS are also
covered.
Updated: June 2017
This guide can be found here
High Availability configurations of SAP NetWeaver in Azure VMs
Title: Azure Virtual Machines high availability for SAP NetWeaver
Summary: In this document, we cover the steps that you can take to deploy high-availability SAP systems in Azure
by using the Azure Resource Manager deployment model. We walk you through these major tasks. In the
document, we describe how single-point-of-failure components like Advanced Business Application Programming
(ABAP) SAP Central Services (ASCS)/SAP Central Services (SCS) and database management systems (DBMS), and
redundant components like SAP Application Server are protected when running in Azure VMs. A step-by-step
example of the installation and configuration of a high-availability SAP system in a Windows Server
Failover Clustering cluster in Azure is also demonstrated in this document.
Updated: June 2017
This guide can be found here
Realizing Multi-SID deployments of SAP NetWeaver in Azure VMs
Title: Create an SAP NetWeaver multi-SID configuration
Summary: This document is an addition to the document High availability for SAP NetWeaver on Azure VMs. Due
to new functionality introduced in Azure in September 2016, it is possible to deploy multiple SAP
NetWeaver ASCS/SCS instances in a pair of Azure VMs. With such a configuration, you can reduce the number of
VMs needed to realize highly available SAP NetWeaver configurations. The guide describes the setup
of such multi-SID configurations.
Updated: December 2016
This guide can be found here
Deployment of SAP NetWeaver in Azure VMs
Title: Azure Virtual Machines deployment for SAP NetWeaver
Summary: This document provides procedural guidance for deploying SAP NetWeaver software to virtual
machines in Azure. This paper focuses on three specific deployment scenarios, with an emphasis on enabling the
Azure Monitoring Extensions for SAP, including troubleshooting recommendations for the Azure Monitoring
Extensions for SAP. This paper assumes that you've read the planning and implementation guide.
Updated: June 2017
This guide can be found here
DBMS deployment guide
Title: Azure Virtual Machines DBMS deployment for SAP NetWeaver
Summary: This paper covers planning and implementation considerations for the DBMS systems that run in
conjunction with SAP. In the first part, general considerations are listed. The following parts of the
paper cover deployments in Azure of the different DBMS supported by SAP: SQL Server, SAP ASE, and Oracle.
These specific parts discuss considerations you have to account for when running SAP systems on Azure in
conjunction with those DBMS, such as the backup and high availability methods supported by the different
DBMS on Azure for use with SAP applications.
Updated: June 2017
This guide can be found here
Using Azure Site Recovery for SAP workload
Title: SAP NetWeaver: Building a Disaster Recovery Solution with Azure Site Recovery
Summary: This document describes how Azure Site Recovery services can be used to handle disaster
recovery scenarios, such as using Azure as the disaster recovery location for an on-premises
SAP landscape. Another scenario described in the document is the Azure-to-Azure
(A2A) disaster recovery case and how it is managed with Azure Site Recovery.
Updated: August 2017
This guide can be found here
SAP certifications and configurations running on Microsoft Azure
8/10/2017 · 1 min to read

SAP and Microsoft have a long history of working together in a strong partnership that has mutual benefits for
their customers. Microsoft is constantly updating its platform and submitting new certification details to SAP in
order to ensure Microsoft Azure is the best platform on which to run your SAP workloads. The following tables
outline our supported configurations and our growing list of certifications.

SAP HANA certifications


SAP PRODUCT | SUPPORTED OS | AZURE OFFERINGS
SAP HANA Developer Edition (including the HANA client software comprised of SQLODBC, ODBO (Windows only), ODBC, and JDBC drivers, HANA studio, and HANA database) | Red Hat Enterprise Linux, SUSE Linux Enterprise | D-Series VM family
HANA One | Red Hat Enterprise Linux, SUSE Linux Enterprise | DS14_v2 (upon general availability)
SAP S/4 HANA | Red Hat Enterprise Linux, SUSE Linux Enterprise | Controlled Availability for GS5; SAP HANA on Azure (Large Instances)
Suite on HANA, OLTP | Red Hat Enterprise Linux, SUSE Linux Enterprise | GS5 for single node deployments for non-production scenarios; SAP HANA on Azure (Large Instances)
HANA Enterprise for BW, OLAP | Red Hat Enterprise Linux, SUSE Linux Enterprise | GS5 for single node deployments; SAP HANA on Azure (Large Instances)
SAP BW/4 HANA | Red Hat Enterprise Linux, SUSE Linux Enterprise | GS5 for single node deployments; SAP HANA on Azure (Large Instances)

SAP NetWeaver certifications


Microsoft Azure is certified for the following SAP products, with full support from Microsoft and SAP.

SAP PRODUCT | GUEST OS | RDBMS | VIRTUAL MACHINE TYPES
SAP Business Suite Software | Windows, SUSE Linux Enterprise, Red Hat Enterprise Linux, Oracle Linux | SQL Server, Oracle (Windows and Oracle Linux only), DB2, SAP ASE | A5 to A11, D11 to D14, DS11 to DS14, DS11_v2 to DS15_v2, GS1 to GS5
SAP Business All-in-One | Windows, SUSE Linux Enterprise, Red Hat Enterprise Linux | SQL Server, Oracle (Windows and Oracle Linux only), DB2, SAP ASE | A5 to A11, D11 to D14, DS11 to DS14, DS11_v2 to DS15_v2, GS1 to GS5
SAP BusinessObjects BI | Windows | N/A | A5 to A11, D11 to D14, DS11 to DS14, DS11_v2 to DS15_v2, GS1 to GS5
SAP NetWeaver | Windows, SUSE Linux Enterprise, Red Hat Enterprise Linux | SQL Server, Oracle (Windows and Oracle Linux only), DB2, SAP ASE | A5 to A11, D11 to D14, DS11 to DS14, DS11_v2 to DS15_v2, GS1 to GS5
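For scripting against this certification list, the table can be expressed as a simple lookup. This is a convenience sketch only; the product names and VM types are copied from the table above, and SAP Note 1928533 remains the authoritative source:

```python
# NetWeaver certification table expressed as a lookup (illustrative sketch).
VM_TYPES = "A5 to A11, D11 to D14, DS11 to DS14, DS11_v2 to DS15_v2, GS1 to GS5"

certifications = {
    "SAP Business Suite Software": {"rdbms": ["SQL Server", "Oracle", "DB2", "SAP ASE"], "vm_types": VM_TYPES},
    "SAP Business All-in-One":     {"rdbms": ["SQL Server", "Oracle", "DB2", "SAP ASE"], "vm_types": VM_TYPES},
    "SAP BusinessObjects BI":      {"rdbms": [], "vm_types": VM_TYPES},  # RDBMS column is N/A
    "SAP NetWeaver":               {"rdbms": ["SQL Server", "Oracle", "DB2", "SAP ASE"], "vm_types": VM_TYPES},
}

def supported_rdbms(product: str):
    """Return the certified RDBMS list for a product from the table above."""
    return certifications[product]["rdbms"]

assert "SAP ASE" in supported_rdbms("SAP NetWeaver")
assert supported_rdbms("SAP BusinessObjects BI") == []
```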
SAP HANA (Large Instances) overview and architecture on Azure
10/4/2017 · 44 min to read

What is SAP HANA on Azure (Large Instances)?


SAP HANA on Azure (Large Instances) is a solution unique to Azure. In addition to providing Azure Virtual Machines
for deploying and running SAP HANA, Azure offers you the possibility to run and deploy SAP HANA
on bare-metal servers that are dedicated to you as a customer. The SAP HANA on Azure (Large Instances) solution
builds on non-shared host/server bare-metal hardware that is assigned to you as a customer. The server hardware
is embedded in larger stamps that contain compute/server, networking, and storage infrastructure, which, as a
combination, is HANA TDI certified. SAP HANA on Azure (Large Instances) offers various
server SKUs or sizes, starting with units that have 72 CPUs and 768 GB of memory and going up to units that have
960 CPUs and 20 TB of memory.
Customer isolation within the infrastructure stamp is performed in tenants, as follows:
Networking: Isolation of customers within the infrastructure stack through virtual networks per customer-assigned
tenant. A tenant is assigned to a single customer, and a customer can have multiple tenants. The network isolation
of tenants prohibits network communication between tenants at the infrastructure stamp level, even if the tenants
belong to the same customer.
Storage components: Isolation through storage virtual machines that have storage volumes assigned to them.
Storage volumes can be assigned to one storage virtual machine only. A storage virtual machine is assigned
exclusively to one single tenant in the SAP HANA TDI certified infrastructure stack. As a result, storage volumes
assigned to a storage virtual machine can be accessed in one specific, related tenant only, and are not
visible between the different deployed tenants.
Server or host: A server or host unit is not shared between customers or tenants. A server or host deployed to a
customer is an atomic bare-metal compute unit that is assigned to one single tenant. No hardware partitioning
or soft partitioning is used that could result in you, as a customer, sharing a host or a server with another
customer. Storage volumes that are assigned to the storage virtual machine of the specific tenant are mounted
to such a server. A tenant can have one to many server units of different SKUs exclusively assigned.
Within an SAP HANA on Azure (Large Instances) infrastructure stamp, many different tenants are deployed and
isolated from each other through the tenant concepts at the networking, storage, and compute level.
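The tenant isolation rules above can be summarized as a small model. All class, tenant, and volume names below are invented for illustration; this is not an Azure API, just a sketch of the stated constraints:

```python
# Illustrative model of HANA Large Instance tenant isolation (hypothetical names).
from dataclasses import dataclass, field

@dataclass
class Tenant:
    name: str
    customer: str          # a tenant belongs to exactly one customer

@dataclass
class StorageVM:
    tenant: Tenant         # a storage VM is assigned to exactly one tenant
    volumes: list = field(default_factory=list)  # each volume belongs to this storage VM only

@dataclass
class ServerUnit:
    sku: str
    tenant: Tenant         # a bare-metal unit is assigned to a single tenant

def can_communicate(a: Tenant, b: Tenant) -> bool:
    # Tenants are network-isolated at the stamp level,
    # even when they belong to the same customer.
    return a is b

contoso_t1 = Tenant("tenant-1", customer="Contoso")
contoso_t2 = Tenant("tenant-2", customer="Contoso")

srv = ServerUnit(sku="S192", tenant=contoso_t1)
svm = StorageVM(tenant=contoso_t1, volumes=["hana-data", "hana-log"])
assert srv.tenant is svm.tenant          # a server mounts only its own tenant's volumes

assert can_communicate(contoso_t1, contoso_t1)
assert not can_communicate(contoso_t1, contoso_t2)  # same customer, still isolated
```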
These bare-metal server units are supported to run SAP HANA only. The SAP application layer or workload
middleware layer runs in Microsoft Azure Virtual Machines. The infrastructure stamps running the SAP HANA on
Azure (Large Instances) units are connected to the Azure network backbones, so that low-latency connectivity
between SAP HANA on Azure (Large Instances) units and Azure Virtual Machines is provided.
This document is one of five documents that cover the topic of SAP HANA on Azure (Large Instances). In this
document, we go through the basic architecture, responsibilities, and services provided, and describe the
capabilities of the solution at a high level. For most areas, like networking and connectivity, the other four
documents cover the details and drill-downs. The documentation of SAP HANA on Azure (Large Instances) does not
cover aspects of SAP NetWeaver installation or deployments of SAP NetWeaver in Azure VMs. That topic is covered
in separate documentation found in the same documentation container.
The five parts of this guide cover the following topics:
SAP HANA (Large Instances) overview and architecture on Azure
SAP HANA (Large Instances) infrastructure and connectivity on Azure
How to install and configure SAP HANA (Large Instances) on Azure
SAP HANA (Large Instances) high availability and disaster recovery on Azure
SAP HANA (Large Instances) troubleshooting and monitoring on Azure

Definitions
Several common definitions are widely used in the Architecture and Technical Deployment Guide. Note the
following terms and their meanings:
IaaS: Infrastructure as a Service
PaaS: Platform as a Service
SaaS: Software as a Service
SAP Component: An individual SAP application, such as ECC, BW, Solution Manager, or EP. SAP components
can be based on traditional ABAP or Java technologies or a non-NetWeaver based application such as Business
Objects.
SAP Environment: One or more SAP components logically grouped to perform a business function, such as
Development, QAS, Training, DR, or Production.
SAP Landscape: Refers to the entire SAP assets in your IT landscape. The SAP landscape includes all production
and non-production environments.
SAP System: The combination of DBMS layer and application layer of an SAP ERP development system, SAP
BW test system, SAP CRM production system, etc. Azure deployments do not support dividing these two layers
between on-premises and Azure. This means an SAP system is either deployed on-premises, or it is deployed in
Azure. However, you can deploy the different systems of an SAP landscape into either Azure or on-premises.
For example, you could deploy the SAP CRM development and test systems in Azure, while deploying the SAP
CRM production system on-premises. For SAP HANA on Azure (Large Instances), it is intended that you host the
SAP application layer of SAP systems in Azure VMs and the related SAP HANA instance on a unit in the HANA
Large Instance stamp.
Large Instance stamp: A hardware infrastructure stack that is SAP HANA TDI certified and dedicated to run
SAP HANA instances within Azure.
SAP HANA on Azure (Large Instances): Official name for the offer in Azure to run HANA instances on SAP
HANA TDI certified hardware that is deployed in Large Instance stamps in different Azure regions. The related
term HANA Large Instance is short for SAP HANA on Azure (Large Instances) and is widely used in this technical
deployment guide.
Cross-Premises: Describes a scenario where VMs are deployed to an Azure subscription that has site-to-site,
multi-site, or ExpressRoute connectivity between the on-premises datacenter(s) and Azure. In common Azure
documentation, these kinds of deployments are also described as Cross-Premises scenarios. The reason for the
connection is to extend on-premises domains, on-premises Active Directory/OpenLDAP, and on-premises DNS
into Azure. The on-premises landscape is extended to the Azure assets of the Azure subscription(s). Having this
extension, the VMs can be part of the on-premises domain. Domain users of the on-premises domain can
access the servers and can run services on those VMs (like DBMS services). Communication and name
resolution between VMs deployed on-premises and Azure deployed VMs is possible. Such is the typical
scenario in which most SAP assets are deployed. See the guides of Planning and design for VPN Gateway and
Create a VNet with a Site-to-Site connection using the Azure portal for more detailed information.
Tenant: A customer deployed in a HANA Large Instance stamp gets isolated into a "tenant." A tenant is isolated
in the networking, storage, and compute layer from other tenants, so that storage and compute units assigned
to the different tenants cannot see each other or communicate with each other at the HANA Large Instance
stamp level. A customer can choose to have deployments into different tenants. Even then, there is no
communication between tenants at the HANA Large Instance stamp level.
A variety of additional resources have been published on the topic of deploying SAP workloads on the
Microsoft Azure public cloud. It is highly recommended that anyone planning and executing a deployment of SAP
HANA in Azure is experienced with and aware of the principles of Azure IaaS and the deployment of SAP workloads
on Azure IaaS. The following resources provide more information and should be referenced before continuing:
Using SAP solutions on Microsoft Azure virtual machines

Certification
Besides the NetWeaver certification, SAP requires a special certification for SAP HANA to support SAP HANA on
certain infrastructures, such as Azure IaaS.
The core SAP Note on NetWeaver, and to a degree SAP HANA certification, is SAP Note #1928533 - SAP
Applications on Azure: Supported Products and Azure VM types.
SAP Note #2316233 - SAP HANA on Microsoft Azure (Large Instances) is also significant. It covers the solution
described in this guide. Running SAP HANA in the GS5 VM type of Azure is also supported; information
for this case is published on the SAP website.
The SAP HANA on Azure (Large Instances) solution referred to in SAP Note #2316233 provides Microsoft and SAP
customers the ability to deploy large SAP Business Suite, SAP Business Warehouse (BW), S/4HANA, BW/4HANA,
or other SAP HANA workloads in Azure. The solution is based on the SAP HANA-certified dedicated hardware
stamp (SAP HANA Tailored Datacenter Integration, or TDI). Running as an SAP HANA TDI configured solution
provides you with the confidence of knowing that all SAP HANA-based applications (including SAP Business Suite
on SAP HANA, SAP Business Warehouse (BW) on SAP HANA, S/4HANA, and BW/4HANA) are going to work on the
hardware infrastructure.
Compared to running SAP HANA in Azure Virtual Machines, this solution has a benefit: it provides for much larger
memory volumes. If you want to enable this solution, there are some key aspects to understand:
The SAP application layer and non-SAP applications run in Azure Virtual Machines (VMs) that are hosted in the
usual Azure hardware stamps.
Customer on-premises infrastructure, data centers, and application deployments are connected to the Microsoft
Azure cloud platform through Azure ExpressRoute (recommended) or Virtual Private Network (VPN). Active
Directory (AD) and DNS are also extended into Azure.
The SAP HANA database instance for HANA workload runs on SAP HANA on Azure (Large Instances). The Large
Instance stamp is connected into Azure networking, so software running in Azure VMs can interact with the
HANA instance running in HANA Large Instances.
Hardware of SAP HANA on Azure (Large Instances) is dedicated hardware provided in an Infrastructure as a
Service (IaaS) model with SUSE Linux Enterprise Server or Red Hat Enterprise Linux pre-installed. As with Azure
Virtual Machines, further updates and maintenance to the operating system are your responsibility.
Installation of HANA or any additional components necessary to run SAP HANA on units of HANA Large
instances is your responsibility, as is all respective ongoing operations and administrations of SAP HANA on
Azure.
In addition to the solutions described here, you can install other components in your Azure subscription that
connect to SAP HANA on Azure (Large Instances). For example, components that enable communication with
and/or directly to the SAP HANA database, such as jump servers, RDP servers, SAP HANA Studio, SAP Data Services
for SAP BI scenarios, or network monitoring solutions.
As in Azure, HANA Large Instances offer functionality supporting High Availability and Disaster Recovery.

Architecture
At a high-level, the SAP HANA on Azure (Large Instances) solution has the SAP application layer residing in Azure
VMs and the database layer residing on SAP TDI configured hardware located in a Large Instance stamp in the
same Azure Region that is connected to Azure IaaS.
NOTE
You need to deploy the SAP application layer in the same Azure Region as the SAP DBMS layer. This rule is well-documented
in published information about SAP workload on Azure.
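The same-region rule in the note above lends itself to a simple pre-deployment check. This is a minimal sketch with hypothetical region strings, not part of any Azure tooling:

```python
# Minimal sketch: enforce the rule that the SAP application layer (Azure VMs)
# and the SAP HANA on Azure (Large Instances) DBMS layer share one Azure region.
def validate_same_region(app_layer_region: str, hana_li_region: str) -> None:
    if app_layer_region.strip().lower() != hana_li_region.strip().lower():
        raise ValueError(
            f"App layer in {app_layer_region!r} but HANA Large Instance in "
            f"{hana_li_region!r}: both layers must be in the same Azure region."
        )

validate_same_region("West US", "West US")     # passes silently
# validate_same_region("West US", "East US")   # would raise ValueError
```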

The overall architecture of SAP HANA on Azure (Large Instances) provides an SAP TDI certified hardware
configuration (non-virtualized, bare metal, high-performance server for the SAP HANA database), and the ability
and flexibility of Azure to scale resources for the SAP application layer to meet your needs.

The architecture shown is divided into three sections:


Right: An on-premises infrastructure running different applications in datacenters with end users accessing
LOB applications (like SAP). Ideally, this on-premises infrastructure is then connected to Azure with Azure
ExpressRoute.
Center: Shows Azure IaaS and, in this case, use of Azure VMs to host SAP or other applications that use SAP
HANA as a DBMS system. Smaller HANA instances that function with the memory Azure VMs provide are
deployed in Azure VMs together with their application layer. Find out more about Virtual Machines.
Azure Networking is used to group SAP systems together with other applications into Azure Virtual
Networks (VNets). These VNets connect to on-premises systems as well as to SAP HANA on Azure (Large
Instances).
For SAP NetWeaver applications and databases that are supported to run in Microsoft Azure, see SAP
Support Note #1928533 SAP Applications on Azure: Supported Products and Azure VM types. For
documentation on deploying SAP solutions on Azure review:
Using SAP on Windows virtual machines (VMs)
Using SAP solutions on Microsoft Azure virtual machines
Left: Shows the SAP HANA TDI certified hardware in the Azure Large Instance stamp. The HANA Large
Instance units are connected to the Azure VNets of your subscription using the same technology as the
connectivity from on-premises into Azure.
The Azure Large Instance stamp itself combines the following components:
Computing: Servers that are based on Intel Xeon E7-8890v3 or Intel Xeon E7-8890v4 processors that provide
the necessary computing capability and are SAP HANA certified.
Network: A unified high-speed network fabric that interconnects the computing, storage, and LAN
components.
Storage: A storage infrastructure that is accessed through a unified network fabric. Specific storage capacity is
provided depending on the specific SAP HANA on Azure (Large Instances) configuration being deployed (more
storage capacity is available at an additional monthly cost).
Within the multi-tenant infrastructure of the Large Instance stamp, customers are deployed as isolated tenants. At
deployment of the tenant, you need to name an Azure subscription within your Azure enrollment. This is the Azure
subscription that the HANA Large Instance(s) are billed against. These tenants have a 1:1 relationship to the Azure
subscription. Network-wise, it is possible to access a HANA Large Instance unit deployed in one tenant in one Azure
region from different Azure VNets that belong to different Azure subscriptions. However, those Azure subscriptions
need to belong to the same Azure enrollment.
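The subscription and enrollment rules just described can be sketched as follows; the subscription and enrollment identifiers are hypothetical examples, not real Azure IDs:

```python
# Hypothetical subscription -> enrollment mapping illustrating the access rule above.
enrollment_of = {
    "sub-hli":  "enrollment-A",  # subscription the HANA Large Instance is billed against
    "sub-app1": "enrollment-A",  # another subscription in the same enrollment
    "sub-ext":  "enrollment-B",  # subscription in a different enrollment
}

def vnet_can_reach_hli(vnet_sub: str, hli_sub: str) -> bool:
    # A VNet in a different subscription can reach the HANA Large Instance
    # unit only if both subscriptions belong to the same Azure enrollment.
    return enrollment_of[vnet_sub] == enrollment_of[hli_sub]

assert vnet_can_reach_hli("sub-app1", "sub-hli")      # same enrollment: reachable
assert not vnet_can_reach_hli("sub-ext", "sub-hli")   # different enrollment: not reachable
```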
As with Azure VMs, SAP HANA on Azure (Large Instances) is offered in multiple Azure regions. To get Disaster
Recovery capabilities, you can choose to opt in. Different Large Instance stamps within one geo-political
region are connected to each other. For example, HANA Large Instance stamps in US West and US East are
connected through a dedicated network link for the purpose of DR replication.
Just as you can choose between different VM types with Azure Virtual Machines, you can choose from different
SKUs of HANA Large Instances that are tailored for different SAP HANA workload types. SAP applies
memory-to-processor-socket ratios for varying workloads based on the Intel processor generations. There are four
different SKU types offered.
As of July 2017, SAP HANA on Azure (Large Instances) is available in several configurations in the Azure regions of
US West, US East, Australia East, Australia Southeast, West Europe, and North Europe:

| SAP solution | CPU | Memory | Storage | Availability |
| --- | --- | --- | --- | --- |
| Optimized for OLAP: SAP BW, BW/4HANA, or SAP HANA for generic OLAP workload | SAP HANA on Azure S72: 2 x Intel Xeon Processor E7-8890 v3, 36 CPU cores and 72 CPU threads | 768 GB | 3 TB | Available |
| --- | SAP HANA on Azure S144: 4 x Intel Xeon Processor E7-8890 v3, 72 CPU cores and 144 CPU threads | 1.5 TB | 6 TB | Not offered anymore |
| --- | SAP HANA on Azure S192: 4 x Intel Xeon Processor E7-8890 v4, 96 CPU cores and 192 CPU threads | 2.0 TB | 8 TB | Available |
| --- | SAP HANA on Azure S384: 8 x Intel Xeon Processor E7-8890 v4, 192 CPU cores and 384 CPU threads | 4.0 TB | 16 TB | Ready to Order |
| Optimized for OLTP: SAP Business Suite on SAP HANA or S/4HANA (OLTP), generic OLTP | SAP HANA on Azure S72m: 2 x Intel Xeon Processor E7-8890 v3, 36 CPU cores and 72 CPU threads | 1.5 TB | 6 TB | Available |
| --- | SAP HANA on Azure S144m: 4 x Intel Xeon Processor E7-8890 v3, 72 CPU cores and 144 CPU threads | 3.0 TB | 12 TB | Not offered anymore |
| --- | SAP HANA on Azure S192m: 4 x Intel Xeon Processor E7-8890 v4, 96 CPU cores and 192 CPU threads | 4.0 TB | 16 TB | Available |
| --- | SAP HANA on Azure S384m: 8 x Intel Xeon Processor E7-8890 v4, 192 CPU cores and 384 CPU threads | 6.0 TB | 18 TB | Ready to Order |
| --- | SAP HANA on Azure S384xm: 8 x Intel Xeon Processor E7-8890 v4, 192 CPU cores and 384 CPU threads | 8.0 TB | 22 TB | Ready to Order |
| --- | SAP HANA on Azure S576: 12 x Intel Xeon Processor E7-8890 v4, 288 CPU cores and 576 CPU threads | 12.0 TB | 28 TB | Ready to Order |
| --- | SAP HANA on Azure S768: 16 x Intel Xeon Processor E7-8890 v4, 384 CPU cores and 768 CPU threads | 16.0 TB | 36 TB | Ready to Order |
| --- | SAP HANA on Azure S960: 20 x Intel Xeon Processor E7-8890 v4, 480 CPU cores and 960 CPU threads | 20.0 TB | 46 TB | Ready to Order |
CPU cores = sum of non-hyper-threaded CPU cores across all processors of the server unit.
CPU threads = sum of compute threads provided by the hyper-threaded CPU cores across all processors of the
server unit. All units are configured by default to use Hyper-Threading.
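As a sanity check, the core and thread counts in the SKU table follow directly from the socket count and processor generation. A minimal sketch, assuming the per-socket core counts of the E7-8890 v3 and v4 from Intel's public specifications (they are not stated in this article); `cpu_counts` is an illustrative helper, not part of any Azure tooling:

```python
# Derive the CPU core/thread counts listed in the SKU table from the
# socket count and processor generation. Per-socket core counts are the
# public Intel specs: E7-8890 v3 = 18 cores, E7-8890 v4 = 24 cores.

CORES_PER_SOCKET = {"E7-8890 v3": 18, "E7-8890 v4": 24}

def cpu_counts(sockets, processor):
    """Return (cores, threads); Hyper-Threading doubles cores into threads."""
    cores = sockets * CORES_PER_SOCKET[processor]
    return cores, 2 * cores

print(cpu_counts(2, "E7-8890 v3"))   # S72/S72m   -> (36, 72)
print(cpu_counts(4, "E7-8890 v4"))   # S192/S192m -> (96, 192)
print(cpu_counts(20, "E7-8890 v4"))  # S960       -> (480, 960)
```

The same computation reproduces every row of the table above, including the largest S960 unit.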
The configurations above that are 'Available' or 'Not offered anymore' are referenced in SAP Support
Note #2316233 - SAP HANA on Microsoft Azure (Large Instances). The configurations marked as 'Ready to Order'
will be added to the SAP Note soon. However, those instance SKUs can already be ordered for the six Azure regions
in which the HANA Large Instance service is available.
The specific configuration chosen depends on workload, CPU resources, and desired memory. It is possible
for OLTP workloads to use the SKUs that are optimized for OLAP workloads.
The hardware base for all the offers is SAP HANA TDI certified. However, we distinguish between two different
classes of hardware, which divide the SKUs into:
S72, S72m, S144, S144m, S192, and S192m, which we refer to as the 'Type I class' of SKUs.
S384, S384m, S384xm, S576, S768, and S960, which we refer to as the 'Type II class' of SKUs.
It is important to note that a complete HANA Large Instance stamp is not exclusively allocated for a single
customer's use. This fact applies to the racks of compute and storage resources connected through a network
fabric deployed in Azure as well. HANA Large Instances infrastructure, like Azure, deploys different customer
"tenants" that are isolated from one another in the following three levels:
Network: Isolation through virtual networks within the HANA Large Instance stamp.
Storage: Isolation through storage virtual machines that have storage volumes assigned and isolate storage
volumes between tenants.
Compute: Dedicated assignment of server units to a single tenant. No hard or soft-partitioning of server units.
No sharing of a single server or host unit between tenants.
As such, the deployments of HANA Large Instances units between different tenants are not visible to each other.
Nor can HANA Large Instance Units deployed in different tenants communicate directly with each other on the
HANA Large Instance stamp level. Only HANA Large Instance Units within one tenant can communicate to each
other on the HANA Large Instance stamp level. A deployed tenant in the Large Instance stamp is assigned
billing-wise to one Azure subscription. However, network-wise it can be accessed from Azure VNets of other Azure
subscriptions within the same Azure enrollment. If you deploy with another Azure subscription in the same Azure
region, you also can choose to ask for a separate HANA Large Instance tenant.
There are significant differences between running SAP HANA on HANA Large Instances and SAP HANA running on
Azure VMs deployed in Azure:
There is no virtualization layer for SAP HANA on Azure (Large Instances). You get the performance of the
underlying bare-metal hardware.
Unlike Azure, the SAP HANA on Azure (Large Instances) server is dedicated to a specific customer. There is no
possibility that a server unit or host is hard- or soft-partitioned. As a result, a HANA Large Instance unit is
assigned as a whole to a tenant, and with that to you as a customer. A reboot or shutdown of the server does
not automatically lead to the operating system and SAP HANA being deployed on another server. (For Type I
class SKUs, the only exception is if a server encounters issues and redeployment needs to be performed
on another server.)
Unlike Azure, where host processor types are selected for the best price/performance ratio, the processor types
chosen for SAP HANA on Azure (Large Instances) are the highest performing of the Intel E7v3 and E7v4
processor line.
Running multiple SAP HANA instances on one HANA Large Instance unit
It is possible to host more than one active SAP HANA instance on the HANA Large Instance units. In order to still
provide the capabilities of Storage Snapshots and Disaster recovery, such a configuration requires a volume set per
instance. As of now, the HANA Large Instance units can be subdivided as follows:
S72, S72m, S144, S192: In increments of 256 GB with 256 GB the smallest starting unit. Different increments
like 256 GB, 512 GB, and so on, can be combined to the maximum of the memory of the unit.
S144m and S192m: In increments of 256 GB with 512 GB the smallest unit. Different increments like 512 GB,
768 GB, and so on, can be combined to the maximum of the memory of the unit.
Type II class: In increments of 512 GB with the smallest starting unit of 2 TB. Different increments like 512 GB, 1
TB, 1.5 TB, and so on, can be combined to the maximum of the memory of the unit.
Some examples of running multiple SAP HANA instances could look like:

| SKU | Memory size | Storage size | Sizes with multiple databases |
| --- | --- | --- | --- |
| S72 | 768 GB | 3 TB | 1x768 GB HANA instance, or 1x512 GB + 1x256 GB instances, or 3x256 GB instances |
| S72m | 1.5 TB | 6 TB | 3x512 GB HANA instances, or 1x512 GB + 1x1 TB instances, or 6x256 GB instances, or 1x1.5 TB instance |
| S192m | 4 TB | 16 TB | 8x512 GB instances, or 4x1 TB instances, or 4x512 GB + 2x1 TB instances, or 4x768 GB + 2x512 GB instances, or 1x4 TB instance |
| S384xm | 8 TB | 22 TB | 4x2 TB instances, or 2x4 TB instances, or 2x3 TB + 1x2 TB instances, or 2x2.5 TB + 1x3 TB instances, or 1x8 TB instance |
You get the idea. There certainly are other variations as well.
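Any such combination can be checked against the increment rules listed above. A minimal sketch, assuming the memory totals from the SKU table earlier; `RULES` and `fits` are illustrative names covering only a subset of SKUs, not part of any Azure tooling:

```python
# Check whether a proposed set of HANA instance sizes fits on a single
# HANA Large Instance unit, per the increment rules described above.
# Memory totals (GB) come from the SKU table; only a subset is listed.

RULES = {
    # sku: (total memory in GB, increment in GB, smallest instance in GB)
    "S72":    (768,  256, 256),
    "S72m":   (1536, 256, 256),
    "S192m":  (4096, 256, 512),
    "S384xm": (8192, 512, 2048),  # Type II class: 512 GB steps, 2 TB minimum
}

def fits(sku, instances_gb):
    """True if the instance sizes obey the increment rules and fit in memory."""
    total, step, smallest = RULES[sku]
    return (sum(instances_gb) <= total
            and all(g >= smallest and g % step == 0 for g in instances_gb))

print(fits("S72",  [512, 256]))            # True:  1x512 GB + 1x256 GB
print(fits("S72m", [512, 1024]))           # True:  1x512 GB + 1x1 TB
print(fits("S384xm", [2560, 2560, 3072]))  # True:  2x2.5 TB + 1x3 TB
print(fits("S72",  [128, 128]))            # False: below the 256 GB minimum
```

The check reproduces the example table: each listed combination sums to the unit's memory and respects the increment and minimum-size rules of its SKU class.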
Using SAP HANA Data Tiering and Extension nodes
SAP supports a Data Tiering model for SAP BW of different SAP NetWeaver releases and SAP BW/4HANA. Details
regarding the Data Tiering model can be found in the document and blog referenced in this document by SAP:
SAP BW/4HANA AND SAP BW ON HANA WITH SAP HANA EXTENSION NODES. With HANA Large Instances, you
can use the option-1 configuration of SAP HANA Extension Nodes as detailed in the FAQ and SAP blog documents.
Option-2 configurations can be set up with the following HANA Large Instance SKUs: S72m, S192, S192m, S384,
and S384m.
Looking at the documentation, the advantage might not be visible immediately. But looking into the SAP sizing
guidelines, you can see an advantage in using option-1 and option-2 SAP HANA extension nodes. Here is an
example:
SAP HANA sizing guidelines usually require double the amount of data volume as memory. So, when you are
running your SAP HANA instance with the hot data, you only have 50% or less of the memory filled with data.
The remainder of the memory is ideally held for SAP HANA to do its work.
That means in a HANA Large Instance S192 unit with 2 TB of memory, running an SAP BW database, you only
have 1 TB as data volume.
If you use an additional SAP HANA Extension Node of option-1, also an S192 HANA Large Instance SKU, it
gives you an additional 2 TB of capacity for data volume. In the option-2 configuration, it gives even an
additional 4 TB for warm data volume. Compared to the hot node, the full memory capacity of the 'warm'
extension node can be used for data storage in option-1, and double the memory can be used for data volume
in the option-2 SAP HANA extension node configuration.
As a result, you end up with a capacity of 3 TB for your data and a hot-to-warm ratio of 1:2 for option-1, and 5 TB
of data and a 1:4 ratio in the option-2 extension node configuration.
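The arithmetic in this example can be sketched as follows; `bw_capacity_tb` is an illustrative helper, not SAP tooling:

```python
# Sketch of the extension-node sizing arithmetic in the example above.
# Hot node: the sizing rule allows only half of memory as data volume.
# Warm node: option-1 uses its full memory for data; option-2 doubles that.

def bw_capacity_tb(node_memory_tb, option):
    """Return (total data capacity in TB, warm-to-hot data ratio)."""
    hot_data = node_memory_tb / 2
    warm_data = node_memory_tb * {1: 1, 2: 2}[option]
    return hot_data + warm_data, warm_data / hot_data

print(bw_capacity_tb(2.0, option=1))  # S192 hot + option-1 node -> (3.0, 2.0)
print(bw_capacity_tb(2.0, option=2))  # S192 hot + option-2 node -> (5.0, 4.0)
```

For the S192 example this reproduces the figures above: 3 TB of data at a 1:2 hot-to-warm ratio for option-1, and 5 TB at 1:4 for option-2.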
However, the higher the data volume compared to the memory, the higher the chances are that the warm data you
are asking for is stored on disk storage.
Operations model and responsibilities
The service provided with SAP HANA on Azure (Large Instances) is aligned with Azure IaaS services. You get a
HANA Large Instance unit with an installed operating system that is optimized for SAP HANA. As with Azure
IaaS VMs, most of the tasks of hardening the OS, installing additional software you need, installing HANA,
operating the OS and HANA, and updating the OS and HANA are your responsibility. Microsoft does not force OS
updates or HANA updates on you.
As you can see in the diagram above, SAP HANA on Azure (Large Instances) is a multi-tenant Infrastructure as a
Service offer. And as a result, the division of responsibility is at the OS-Infrastructure boundary, for the most part.
Microsoft is responsible for all aspects of the service below the line of the operating system and you are
responsible above the line, including the operating system. So most current on-premises methods you may be
employing for compliance, security, application management, basis, and OS management can continue to be used.
The systems appear as if they are in your network in all regards.
However, this service is optimized for SAP HANA, so there are areas where you and Microsoft need to work
together to use the underlying infrastructure capabilities for best results.
The following list provides more detail on each of the layers and your responsibilities:
Networking: All the internal networks for the Large Instance stamp running SAP HANA, its access to the storage,
connectivity between the instances (for scale-out and other functions), connectivity to the landscape, and
connectivity to Azure where the SAP application layer is hosted in Azure Virtual Machines. It also includes WAN
connectivity between Azure datacenters for Disaster Recovery replication purposes. All networks are partitioned
by tenant and have QoS applied.
Storage: The virtualized partitioned storage for all volumes needed by the SAP HANA servers, as well as for
snapshots.
Servers: The dedicated physical servers to run the SAP HANA databases, assigned to tenants. The servers of the
Type I class of SKUs are hardware-abstracted. With these types of servers, the server configuration is collected and
maintained in profiles, which can be moved from one physical server to another. Such a (manual) move of a profile
by operations can be compared somewhat to Azure service healing. The servers of the Type II class of SKUs do not
offer such a capability.
SDDC: The management software that is used to manage data centers as software defined entities. It allows
Microsoft to pool resources for scale, availability, and performance reasons.
O/S: The OS you choose (SUSE Linux or Red Hat Linux) that is running on the servers. The OS images you are
provided are the images provided by the individual Linux vendor to Microsoft for the purpose of running SAP
HANA. You are required to have a subscription with the Linux vendor for the specific SAP HANA-optimized image.
Your responsibilities include registering the images with the OS vendor. From the point of handover by Microsoft,
you are also responsible for any further patching of the Linux operating system. This patching also includes
additional packages that might be necessary for a successful SAP HANA installation (refer to SAP's HANA
installation documentation and SAP Notes) and which have not been included by the specific Linux vendor in their
SAP HANA optimized OS images. The responsibility of the customer also includes patching of the OS that is related
to malfunction/optimization of the OS and its drivers related to the specific server hardware, as well as any security
or functional patching of the OS. Your responsibility also includes monitoring and capacity planning of:
CPU resource consumption
Memory consumption
Disk volumes related to free space, IOPS, and latency
Network volume traffic between HANA Large Instance and SAP application layer
The underlying infrastructure of HANA Large Instances provides functionality for backup and restore of the OS
volume. Using this functionality is also your responsibility.
Middleware: The SAP HANA Instance, primarily. Administration, operations, and monitoring are your
responsibility. There is functionality provided that enables you to use storage snapshots for backup/restore and
Disaster Recovery purposes. These capabilities are provided by the infrastructure. However, your responsibilities
also include designing High Availability or Disaster Recovery with these capabilities, leveraging them, and
monitoring that storage snapshots have been executed successfully.
Data: Your data managed by SAP HANA, and other data such as backup files located on volumes or file shares.
Your responsibilities include monitoring disk free space and managing the content on the volumes, and
monitoring the successful execution of backups of disk volumes and storage snapshots. However, successful
execution of data replication to DR sites is the responsibility of Microsoft.
Applications: The SAP application instances or, in case of non-SAP applications, the application layer of those
applications. Your responsibilities include deployment, administration, operations, and monitoring of those
applications related to capacity planning of CPU resource consumption, memory consumption, Azure Storage
consumption and network bandwidth consumption within Azure VNets, and from Azure VNets to SAP HANA on
Azure (Large Instances).
WANs: The connections you establish from on-premises to Azure deployments for workloads. All our customers
with HANA Large Instances use Azure ExpressRoute for connectivity. This connection is not part of the SAP HANA
on Azure (Large Instances) solution, so you are responsible for the setup of this connection.
Archive: You might prefer to archive copies of data using your own methods in storage accounts. Archiving
requires management, compliance, costs, and operations. You are responsible for generating archive copies and
backups on Azure, and storing them in a compliant way.
See the SLA for SAP HANA on Azure (Large Instances).
Sizing
Sizing for HANA Large Instances is no different than sizing for HANA in general. For existing systems that you
want to move from another RDBMS to HANA, SAP provides a number of reports that run on your existing SAP
systems. If the database is moved to HANA, these reports check the data and calculate memory requirements for
the HANA instance. Read the following SAP Notes for more information on how to run these reports, and how to
obtain their most recent patches/versions:
SAP Note #1793345 - Sizing for SAP Suite on HANA
SAP Note #1872170 - Suite on HANA and S/4 HANA sizing report
SAP Note #2121330 - FAQ: SAP BW on HANA Sizing Report
SAP Note #1736976 - Sizing Report for BW on HANA
SAP Note #2296290 - New Sizing Report for BW on HANA
For green field implementations, SAP Quick Sizer is available to calculate memory requirements of the
implementation of SAP software on top of HANA.
Memory requirements for HANA are increasing as data volume grows, so you want to be aware of the memory
consumption now and be able to predict what it is going to be in the future. Based on the memory requirements,
you can then map your demand into one of the HANA Large Instance SKUs.
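Mapping a sized memory requirement onto the smallest SKU that covers it can be sketched like this. The memory figures are taken from the SKU table earlier in this article; the list omits the SKUs that are not offered anymore, it ignores the OLAP/OLTP distinction, and `smallest_sku` is an illustrative helper, not Azure tooling:

```python
# Pick the smallest HANA Large Instance SKU whose memory covers the sized
# requirement. Memory figures (TB) are from the SKU table in this article;
# SKUs marked 'Not offered anymore' (S144, S144m) are omitted.

SKUS_TB = [  # (name, memory in TB), sorted ascending by memory
    ("S72", 0.75), ("S72m", 1.5), ("S192", 2.0), ("S192m", 4.0),
    ("S384m", 6.0), ("S384xm", 8.0), ("S576", 12.0),
    ("S768", 16.0), ("S960", 20.0),
]

def smallest_sku(required_tb):
    """Return the first SKU with enough memory, or None if none fits."""
    for name, memory_tb in SKUS_TB:
        if memory_tb >= required_tb:
            return name
    return None

print(smallest_sku(1.2))   # -> S72m
print(smallest_sku(5.0))   # -> S384m
print(smallest_sku(25.0))  # -> None: requirement exceeds the largest SKU
```

Remember to size for projected data growth, not just the current memory consumption, before picking the SKU.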
Requirements
This list assembles the requirements for running SAP HANA on Azure (Large Instances).
Microsoft Azure:
An Azure subscription that can be linked to SAP HANA on Azure (Large Instances).
Microsoft Premier Support Contract. See SAP Support Note #2015553 - SAP on Microsoft Azure: Support
Prerequisites for specific information related to running SAP in Azure. When using HANA Large Instance units
with 384 and more CPUs, you also need to extend the Premier Support contract to include Azure Rapid Response
(ARR).
Awareness of the HANA large instances SKUs you need after performing a sizing exercise with SAP.
Network Connectivity:
Azure ExpressRoute between on-premises to Azure: To connect your on-premises datacenter to Azure, make
sure to order at least a 1 Gbps connection from your ISP.
Operating System:
Licenses for SUSE Linux Enterprise Server 12 for SAP Applications.
NOTE
The Operating System delivered by Microsoft is not registered with SUSE, nor is it connected with an SMT instance.
SUSE Linux Subscription Management Tool (SMT) deployed in Azure on an Azure VM. This provides the ability
for SAP HANA on Azure (Large Instances) to be registered and respectively updated by SUSE (as there is no
internet access within HANA Large Instances data center).
Licenses for Red Hat Enterprise Linux 6.7 or 7.2 for SAP HANA.
NOTE
The Operating System delivered by Microsoft is not registered with Red Hat, nor is it connected to a Red Hat
Subscription Manager Instance.
Red Hat Subscription Manager deployed in Azure on an Azure VM. The Red Hat Subscription Manager provides
the ability for SAP HANA on Azure (Large Instances) to be registered and respectively updated by Red Hat (as
there is no direct internet access from within the tenant deployed on the Azure Large Instance stamp).
SAP requires you to have a support contract with your Linux provider as well. This requirement is not removed
by the HANA Large Instance solution or the fact that you run Linux in Azure. Unlike with some of the Linux
Azure gallery images, the service fee is NOT included in the solution offer of HANA Large Instances. It is on you
as a customer to fulfill the requirements of SAP regarding support contracts with the Linux distributor.
For SUSE Linux, look up the requirements of support contract in SAP Note #1984787 - SUSE LINUX
Enterprise Server 12: Installation notes and SAP Note #1056161 - SUSE Priority Support for SAP
applications.
For Red Hat Linux, you need to have the correct subscription levels that include support and services
(updates to the operating systems of HANA Large Instances). Red Hat recommends getting an "RHEL for
SAP Business Applications" subscription. Regarding support and services, check SAP Note #2002167 -
Red Hat Enterprise Linux 7.x: Installation and Upgrade and SAP Note #1496410 - Red Hat Enterprise
Linux 6.x: Installation and Upgrade for details.
Database:
Licenses and software installation components for SAP HANA (platform or enterprise edition).
Applications:
Licenses and software installation components for any SAP applications connecting to SAP HANA and related
SAP support contracts.
Licenses and software installation components for any non-SAP applications used in relation to SAP HANA on
Azure (Large Instances) environment and related support contracts.
Skills:
Experience and knowledge on Azure IaaS and its components.
Experience and knowledge on deploying SAP workload in Azure.
SAP HANA installation-certified personnel.
SAP architect skills to design High Availability and Disaster Recovery around SAP HANA.
SAP:
The expectation is that you are an SAP customer and have a support contract with SAP.
Especially for implementations on the Type II class of HANA Large Instance SKUs, it is highly recommended to
consult with SAP on versions of SAP HANA and eventual configurations on large-sized scale-up hardware.
Storage
The storage layout for SAP HANA on Azure (Large Instances) is configured by SAP HANA on Azure Service
Management through SAP recommended guidelines, documented in the SAP HANA Storage Requirements white
paper.
The HANA Large Instances of the Type I class come with four times the memory volume as storage volume. For the
Type II class of HANA Large Instance units, the storage is not four times the memory. The units come with a
volume that is intended for storing HANA transaction log backups. Find more details in How to install and
configure SAP HANA (large instances) on Azure.
See the following table in terms of storage allocation. The table lists roughly the capacity for the different volumes
provided with the different HANA Large Instance units.
| HANA Large Instance SKU | HANA/DATA | HANA/LOG | HANA/SHARED | HANA/LOG/BACKUP |
| --- | --- | --- | --- | --- |
| S72 | 1,280 GB | 512 GB | 768 GB | 512 GB |
| S72m | 3,328 GB | 768 GB | 1,280 GB | 768 GB |
| S192 | 4,608 GB | 1,024 GB | 1,536 GB | 1,024 GB |
| S192m | 11,520 GB | 1,536 GB | 1,792 GB | 1,536 GB |
| S384 | 11,520 GB | 1,536 GB | 1,792 GB | 1,536 GB |
| S384m | 12,000 GB | 2,050 GB | 2,050 GB | 2,040 GB |
| S384xm | 16,000 GB | 2,050 GB | 2,050 GB | 2,040 GB |
| S576 | 20,000 GB | 3,100 GB | 2,050 GB | 3,100 GB |
| S768 | 28,000 GB | 3,100 GB | 2,050 GB | 3,100 GB |
| S960 | 36,000 GB | 4,100 GB | 2,050 GB | 4,100 GB |
Actual deployed volumes may vary a bit based on deployment and tool that is used to show the volume sizes.
If you subdivide a HANA Large Instance SKU, a few examples of possible division pieces would look like:
| Memory partition in GB | HANA/DATA | HANA/LOG | HANA/SHARED | HANA/LOG/BACKUP |
| --- | --- | --- | --- | --- |
| 256 | 400 GB | 160 GB | 304 GB | 160 GB |
| 512 | 768 GB | 384 GB | 512 GB | 384 GB |
| 768 | 1,280 GB | 512 GB | 768 GB | 512 GB |
| 1024 | 1,792 GB | 640 GB | 1,024 GB | 640 GB |
| 1536 | 3,328 GB | 768 GB | 1,280 GB | 768 GB |
These sizes are rough volume numbers that can vary slightly based on deployment and the tools used to look at the
volumes. There are also other partition sizes conceivable, like 2.5 TB. These storage sizes would be calculated with a
similar formula as used for the partitions above. The term 'partitions' does not indicate that the operating system,
memory, or CPU resources are in any way partitioned. It just indicates storage partitions for the different HANA
instances you might want to deploy on one single HANA Large Instance unit.
If you as a customer need more storage, you can purchase additional storage in 1-TB units. This additional
storage can be added as additional volumes or can be used to extend one or more of the existing volumes. It is
not possible to decrease the sizes of the volumes as originally deployed and documented by the tables above. It is
also not possible to change the names of the volumes or the mount names. The storage volumes described above
are attached to the HANA Large Instance units as NFS4 volumes.
You as a customer can choose to use storage snapshots for backup/restore and disaster recovery purposes. More
details on this topic are detailed in SAP HANA (large instances) High Availability and Disaster Recovery on Azure.
Encryption of data at rest
The storage used for HANA Large Instances allows a transparent encryption of the data as it is stored on the disks.
At deployment time of a HANA Large Instance Unit, you have the option to have this kind of encryption enabled.
You also can choose to change to encrypted volumes after the deployment. The move from non-encrypted
to encrypted volumes is transparent and does not require downtime.
With the Type I class of SKUs, the volume that the boot LUN is stored on is encrypted. For the Type II class of SKUs
of HANA Large Instances, you need to encrypt the boot LUN with OS methods.
Networking
The architecture of Azure Networking is a key component to successful deployment of SAP applications on HANA
Large Instances. Typically, SAP HANA on Azure (Large Instances) deployments have a larger SAP landscape with
several different SAP solutions with varying sizes of databases, CPU resource consumption, and memory
utilization. Likely not all of those SAP systems are based on SAP HANA, so your SAP landscape would probably be
a hybrid that uses:
Deployed SAP systems on-premises. Due to their sizes, these systems cannot currently be hosted in Azure; a
classic example would be a production SAP ERP system running on Microsoft SQL Server (as the database)
that requires more CPU or memory resources than Azure VMs can provide.
Deployed SAP HANA-based SAP systems on-premises.
Deployed SAP systems in Azure VMs. These systems could be development, testing, sandbox, or production
instances for any of the SAP NetWeaver-based applications that can successfully deploy in Azure (on VMs),
based on resource consumption and memory demand. These systems also could be based on databases like
SQL Server (see SAP Support Note #1928533 - SAP Applications on Azure: Supported Products and Azure VM
types) or SAP HANA (see SAP HANA Certified IaaS Platforms).
Deployed SAP application servers in Azure (on VMs) that leverage SAP HANA on Azure (Large Instance) in
Azure Large Instance stamps.
While a hybrid SAP landscape (with four or more different deployment scenarios) is typical, there are many
customer cases of complete SAP landscape running in Azure. As Microsoft Azure VMs are becoming more
powerful, the number of customers moving all their SAP solutions on Azure is increasing.
Azure networking in the context of SAP systems deployed in Azure is not complicated. It is based on the following
principles:
Azure Virtual Networks (VNets) need to be connected to the Azure ExpressRoute circuit that connects to the
on-premises network.
An ExpressRoute circuit connecting on-premises to Azure usually should have a bandwidth of 1 Gbps or higher.
This minimal bandwidth allows adequate bandwidth for transferring data between on-premises systems and
systems running on Azure VMs (as well as connections to Azure systems from end users on-premises).
All SAP systems in Azure need to be set up in Azure VNets to communicate with each other.
Active Directory and DNS hosted on-premises are extended into Azure through ExpressRoute.
NOTE
From a billing point of view, a single Azure subscription can be linked to only one tenant in a Large Instance stamp
in a specific Azure region, and conversely a single Large Instance stamp tenant can be linked to only one Azure
subscription. This fact is no different from any other billable objects in Azure.
Deploying SAP HANA on Azure (Large Instances) in multiple different Azure regions results in a separate tenant
being deployed in each Large Instance stamp. However, you can run both under the same Azure subscription as
long as these instances are part of the same SAP landscape.
IMPORTANT
Only Azure Resource Manager deployment is supported with SAP HANA on Azure (Large Instances).
Additional Azure VNet information
In order to connect an Azure VNet to ExpressRoute, an Azure gateway must be created (see About virtual network
gateways for ExpressRoute). An Azure gateway can be used either with ExpressRoute to an infrastructure outside of
Azure (or to an Azure Large instance stamp), or to connect between Azure VNets (see Configure a VNet-to-VNet
connection for Resource Manager using PowerShell). You can connect the Azure gateway to a maximum of four
different ExpressRoute connections, as long as those connections come from different Microsoft Enterprise Edge
(MSEE) routers. For more information, see SAP HANA (large instances) Infrastructure and connectivity on Azure.
NOTE
The throughput an Azure gateway provides is different for the two use cases (see About VPN Gateway). The
maximum throughput that can be achieved with a VNet gateway is 10 Gbps, using an ExpressRoute connection.
Keep in mind that copying files between an Azure VM residing in an Azure VNet and a system on-premises (as a
single copy stream) does not achieve the full throughput of the different gateway SKUs. To leverage the complete
bandwidth of the VNet gateway, you must either use multiple streams, or copy different files in parallel streams.
Networking Architecture for HANA Large Instances
The networking architecture for HANA Large Instances, as shown below, can be separated into four different parts:
On-premises networking and the ExpressRoute connection to Azure. This part is the customer's domain and is
connected to Azure through ExpressRoute. It is shown in the lower right of the graphic below.
Azure networking, as briefly discussed above, with Azure VNets, which again have gateways. This is an area
where you need to find the appropriate design for your application, security, and compliance requirements.
Using HANA Large Instances is another point of consideration in terms of the number of VNets and the Azure
gateway SKUs to choose from. This is the part in the upper right of the graphic.
Connectivity of HANA Large Instances through ExpressRoute technology into Azure. This part is deployed and
handled by Microsoft. All you need to do as a customer is provide some IP address ranges and, after the
deployment of your assets in HANA Large Instances, connect the ExpressRoute circuit to the Azure VNet(s)
(see SAP HANA (large instances) Infrastructure and connectivity on Azure).
Networking in HANA Large Instances, which is mostly transparent for you as a customer.
The fact that you use HANA Large Instances does not change the requirement to get your on-premises assets
connected through ExpressRoute to Azure. It also does not change the requirement for having one or multiple
VNets that run the Azure VMs that host the application layer, which connects to the HANA instances hosted in
HANA Large Instance units.
The difference compared to SAP deployments purely in Azure comes down to the following facts:
The HANA Large Instance units of your customer tenant are connected through another ExpressRoute circuit
into your Azure VNet(s). To separate load conditions, the on-premises-to-Azure-VNet ExpressRoute links and
the links between Azure VNets and HANA Large Instances do not share the same routers.
The workload profile between the SAP application layer and the HANA instance is of a different nature: many
small requests and burst-like data transfers (result sets) from SAP HANA into the application layer.
The SAP application architecture is more sensitive to network latency than typical scenarios where data is
exchanged between on-premises and Azure.
The VNet gateway has at least two ExpressRoute connections, and both connections share the maximum
incoming bandwidth of the VNet gateway.
The network latency experienced between Azure VMs and HANA Large Instance units can be higher than a typical
VM-to-VM network round-trip latency. Depending on the Azure region, the values measured can exceed the 0.7 ms
round-trip latency classified as below average in SAP Note #1100926 - FAQ: Network performance. Nevertheless,
customers have deployed SAP HANA-based production SAP applications very successfully on HANA Large
Instances. The customers who deployed reported great improvements by running their SAP applications on SAP
HANA using HANA Large Instance units. Nevertheless, you should test your business processes thoroughly on
Azure HANA Large Instances.
In order to provide deterministic network latency between Azure VMs and HANA Large Instance, the choice of the
Azure VNet Gateway SKU is essential. Unlike the traffic patterns between on-premises and Azure VMs, the traffic
pattern between Azure VMs and HANA Large Instances can develop small but high bursts of requests and data
volumes to be transmitted. In order to have such bursts handled well, we highly recommend the usage of the
UltraPerformance Gateway SKU. For the Type II class of HANA Large Instance SKUs, the usage of the
UltraPerformance gateway SKU as Azure VNet Gateway is mandatory.

IMPORTANT
Given the overall network traffic between the SAP application and database layers, only the HighPerformance or
UltraPerformance gateway SKUs for VNets are supported for connecting to SAP HANA on Azure (Large Instances). For HANA
Large Instance Type II SKUs, only the UltraPerformance gateway SKU is supported as the Azure VNet gateway.

Single SAP system


The on-premises infrastructure shown above is connected through ExpressRoute into Azure, and the ExpressRoute
circuit connects into a Microsoft Enterprise Edge Router (MSEE) (see ExpressRoute technical overview). Once
established, that route connects into the Microsoft Azure backbone, and all Azure regions are accessible.

NOTE
For running SAP landscapes in Azure, connect to the MSEE closest to the Azure region of the SAP landscape. Azure Large
Instance stamps are connected through dedicated MSEE devices to minimize the network latency between Azure VMs in
Azure IaaS and Large Instance stamps.

The VNet gateway for the Azure VMs, hosting SAP application instances, is connected to that ExpressRoute circuit,
and the same VNet is connected to a separate MSEE Router dedicated to connecting to Large Instance stamps.
This is a straightforward example of a single SAP system, where the SAP application layer is hosted in Azure and
the SAP HANA database runs on SAP HANA on Azure (Large Instances). The assumption is that the VNet gateway
bandwidth of 2 Gbps or 10 Gbps throughput does not represent a bottleneck.
Multiple SAP systems or large SAP systems
If multiple SAP systems or large SAP systems are deployed connecting to SAP HANA on Azure (Large Instances),
it's reasonable to assume that the throughput of the VNet gateway may become a bottleneck. In such a case, you need
to split the application layers into multiple Azure VNets. It also might be advisable to create dedicated VNets
that connect to HANA Large Instances for cases like:
Performing backups directly from the HANA Instances in HANA Large Instances to a VM in Azure that hosts
NFS shares
Copying large backups or other files from HANA Large Instance units to disk space managed in Azure.
Using separate VNets to host the VMs that manage the storage avoids the impact of large file or data transfers from
HANA Large Instances on the VNet gateway that serves the VMs running the SAP application layer.
For a more scalable network architecture:
Leverage multiple Azure VNets for a single, larger SAP application layer.
Deploy one separate Azure VNet for each SAP system deployed, compared to combining these SAP systems
in separate subnets under the same VNet.
A more scalable networking architecture for SAP HANA on Azure (Large Instances):
Deploying the SAP application layer, or its components, over multiple Azure VNets as shown above introduces
unavoidable latency overhead during communication between the applications hosted in those Azure VNets. By
default, the network traffic between Azure VMs located in different VNets routes through the MSEE routers in this
configuration. Since September 2016, however, this routing can be optimized: the way to cut down the latency in
communication between two VNets is to peer Azure VNets within the same region, even if those VNets are in
different subscriptions. Using Azure VNet peering, VMs in two different Azure VNets can use the Azure network
backbone to communicate with each other directly, showing latency similar to VMs within the same VNet. Traffic
addressing IP address ranges that are connected through the Azure VNet gateway, in contrast, is routed through
the individual VNet gateway of the VNet. You can get details about Azure VNet peering in the article VNet peering.
Routing in Azure
There are two important network routing considerations for SAP HANA on Azure (Large Instances):
1. SAP HANA on Azure (Large Instances) can only be accessed by Azure VMs through the dedicated ExpressRoute
connection, not directly from on-premises. As a result, administration clients and applications that need direct
access, such as SAP Solution Manager running on-premises, cannot connect to the SAP HANA database.
2. SAP HANA on Azure (Large Instances) units have an assigned IP address from the Server IP Pool address
range you as the customer submitted (see SAP HANA (large instances) Infrastructure and connectivity on
Azure for details). This IP address is accessible through the Azure subscriptions and ExpressRoute that
connects Azure VNets to HANA on Azure (Large Instances). The IP address assigned out of that Server IP
Pool address range is directly assigned to the hardware unit and is no longer NAT'ed, as was the case in
the first deployments of this solution.

NOTE
If you need to connect to SAP HANA on Azure (Large Instances) in a data warehouse scenario, where applications and/or
end users need to connect directly to the SAP HANA database, another networking component must be used: a
reverse proxy to route data to and from, for example, F5 BIG-IP, or NGINX with Traffic Manager, deployed in Azure as a
virtual firewall/traffic-routing solution.

Internet connectivity of HANA Large Instances


HANA Large Instances do NOT have direct internet connectivity. This restricts your ability to, for example,
register the OS image directly with the OS vendor. Hence, you might need to work with a local SLES SMT server or
RHEL Subscription Manager.
Data encryption between Azure VMs and HANA Large Instances
Data transferred between HANA Large Instances and Azure VMs is not encrypted. However, purely for the
exchange between the HANA DBMS side and JDBC/ODBC-based applications, you can enable encryption of traffic.
Reference this documentation by SAP.
Using HANA Large Instance Units in multiple regions
You might have other reasons to deploy SAP HANA on Azure (Large Instances) in multiple Azure regions, besides
disaster recovery. Perhaps you want to access HANA Large Instances from each of the VMs deployed in the
different VNets in the regions. Since the IP addresses assigned to the different HANA Large Instances units are not
propagated beyond the Azure VNets (that are directly connected through their gateway to the instances), there is a
slight change to the VNet design introduced above: an Azure VNet gateway can handle four different ExpressRoute
circuits out of different MSEEs, and each VNet that is connected to one of the Large Instance stamps can be
connected to the Large Instance stamp in another Azure region.
The above figure shows how the different Azure VNets in both regions are connected to two different
ExpressRoute circuits that are used to connect to SAP HANA on Azure (Large Instances) in both Azure regions. The
newly introduced connections are the rectangular red lines. With these connections, out of the Azure VNets, the
VMs running in one of those VNets can access each of the different HANA Large Instances units deployed in the
two regions. As you see in the graphics above, it is assumed that you have two ExpressRoute connections from on-
premises to the two Azure regions; recommended for Disaster Recovery reasons.

IMPORTANT
If multiple ExpressRoute circuits are used, AS Path prepending and Local Preference BGP settings should be used to ensure
proper routing of traffic.
SAP HANA (large instances) infrastructure and
connectivity on Azure
8/11/2017 23 min to read

Some definitions upfront before reading this guide. In SAP HANA (large instances) overview and architecture on
Azure we introduced two different classes of HANA Large Instance units with:
S72, S72m, S144, S144m, S192, and S192m, which we refer to as the 'Type I class' of SKUs.
S384, S384m, S384xm, S576, S768, and S960, which we refer to as the 'Type II class' of SKUs.
The class specifiers are going to be used throughout the HANA Large Instance documentation to eventually refer
to different capabilities and requirements based on HANA Large Instance SKUs.
Other definitions we use frequently are:
Large Instance stamp: A hardware infrastructure stack that is SAP HANA TDI certified and dedicated to run
SAP HANA instances within Azure.
SAP HANA on Azure (Large Instances): Official name for the offer in Azure to run HANA instances on SAP
HANA TDI certified hardware that is deployed in Large Instance stamps in different Azure regions. The related
term HANA Large Instance is short for SAP HANA on Azure (Large Instances) and is widely used in this
technical deployment guide.
After the purchase of SAP HANA on Azure (Large Instances) is finalized between you and the Microsoft enterprise
account team, the following information is required by Microsoft to deploy HANA Large Instance Units:
Customer name
Business contact information (including e-mail address and phone number)
Technical contact information (including e-mail address and phone number)
Technical networking contact information (including e-mail address and phone number)
Azure deployment region (West US, East US, Australia East, Australia Southeast, West Europe, and North
Europe as of July 2017)
Confirm SAP HANA on Azure (Large Instances) SKU (configuration)
As already detailed in the Overview and Architecture document for HANA Large Instances, for every Azure
Region being deployed to:
A /29 IP address range for ER-P2P Connections that connect Azure VNets to HANA Large Instances
A /24 CIDR Block used for the HANA Large Instances Server IP Pool
The IP address range values used in the VNet Address Space attribute of every Azure VNet that connects to the
HANA Large Instances
Data for each HANA Large Instance system:
Desired hostname - ideally with fully qualified domain name.
Desired IP address for the HANA Large Instance unit out of the Server IP Pool address range - Keep in
mind that the first 30 IP addresses in the Server IP Pool address range are reserved for internal usage
within HANA Large Instances
SAP HANA SID name for the SAP HANA instance (required to create the necessary SAP HANA-related
disk volumes). The HANA SID is required for creating the permissions on the NFS volumes that get
attached to the HANA Large Instance unit. It also is used as one of the name components of
the disk volumes that get mounted. If you want to run more than one HANA instance on the unit, you
need to list multiple HANA SIDs. Each one gets a separate set of volumes assigned.
The groupid the hana-sidadm user has in the Linux OS is required to create the necessary SAP HANA-
related disk volumes. The SAP HANA installation usually creates the sapsys group with a group id of
1001. The hana-sidadm user is part of that group
The userid the hana-sidadm user has in the Linux OS is required to create the necessary SAP HANA-
related disk volumes. If you are running multiple HANA instances on the unit, you need to list all the
sidadm users.
Azure subscription ID for the Azure subscription to which SAP HANA on Azure HANA Large Instances are going
to be directly connected. This subscription ID references the Azure subscription, which is going to be charged
with the HANA Large Instance unit(s).
After you provide the information, Microsoft provisions SAP HANA on Azure (Large Instances) and will return the
information necessary to link your Azure VNets to HANA Large Instances and to access the HANA Large Instance
units.

Connecting Azure VMs to HANA Large Instances


As already mentioned in SAP HANA (large instances) overview and architecture on Azure the minimal deployment
of HANA Large Instances with the SAP application layer in Azure looks like:

Looking closer at the Azure VNet side, we realize the need for:
The definition of an Azure VNet that is going to be used to deploy the VMs of the SAP application layer into.
That automatically means that a default subnet in the Azure VNet is defined, which is the one used to deploy
the VMs into.
The Azure VNet that's created needs to have at least one VM subnet and one ExpressRoute Gateway subnet.
These subnets should be assigned the IP address ranges as specified and discussed in the following sections.
So, let's take a closer look at the Azure VNet creation for HANA Large Instances.
Creating the Azure VNet for HANA Large Instances

NOTE
The Azure VNet for HANA Large Instance must be created using the Azure Resource Manager deployment model. The older
Azure deployment model, commonly known as classic deployment model, is not supported with the HANA Large Instance
solution.

The VNet can be created using the Azure portal, PowerShell, Azure template, or Azure CLI (see Create a virtual
network using the Azure portal). In the following example, we look into a VNet created through the Azure portal.
Looking at the definitions of an Azure VNet through the Azure portal, let's see how those definitions
relate to the different IP address ranges we list. When we talk about the Address Space, we mean the
address space that the Azure VNet is allowed to use. This address space is also the address range
that the VNet uses for BGP route propagation. This Address Space can be seen here:

In the preceding case, with 10.16.0.0/16, the Azure VNet was given a rather large and wide IP address range to
use. This means that all the IP address ranges of subsequent subnets within this VNet can have their ranges within
that 'Address Space'. We usually do not recommend such a large address range for a single VNet in Azure. But
taking a step further, let's look into the subnets defined in the Azure VNet:

As you can see, we look at a VNet with a first VM subnet (here called 'default') and a subnet called
'GatewaySubnet'. In the following section, we refer to the IP address range of the subnet, which was called
'default' in the graphics as Azure VM subnet IP address range. In the following sections, we refer to the IP
address range of the Gateway Subnet as VNet Gateway Subnet IP address range.
In the case demonstrated by the two graphics above, you see that the VNet Address Space covers both, the
Azure VM subnet IP address range and the VNet Gateway Subnet IP address range.
In other cases where you need to conserve or be specific with your IP address ranges, you can restrict the VNet
Address Space of a VNet to the specific ranges being used by each subnet. In this case, you can define the VNet
Address Space of a VNet as multiple specific ranges as shown here:

In this case, the VNet Address Space has two spaces defined. These two spaces are identical to the IP address
ranges defined for the Azure VM subnet IP address range and the VNet Gateway Subnet IP address range.
You can use any naming standard you like for these tenant subnets (VM subnets). However, there must always
be one, and only one, gateway subnet for each VNet that connects to the SAP HANA on Azure (Large
Instances) ExpressRoute circuit. And this gateway subnet must always be named "GatewaySubnet" to
ensure proper placement of the ExpressRoute gateway.

WARNING
It is critical that the gateway subnet always is named "GatewaySubnet."

Multiple VM subnets may be used, even utilizing non-contiguous address ranges. But as mentioned previously,
these address ranges must be covered by the VNet Address Space of the VNet, either in aggregated form or as a
list of the exact ranges of the VM subnets and the gateway subnet.
Summarizing the important facts about an Azure VNet that connects to HANA Large Instances:
You need to submit to Microsoft the VNet Address Space when performing an initial deployment of HANA
Large Instances.
The VNet Address Space can be one larger range that covers the range for Azure VM subnet IP address
range(s) and the VNet Gateway Subnet IP address range.
Or you can submit as VNet Address Space multiple ranges that cover the different IP address ranges of VM
subnet IP address range(s) and the VNet Gateway Subnet IP address range.
The defined VNet Address Space is used for BGP routing propagation.
The name of the Gateway subnet must be: "GatewaySubnet."
The VNet Address Space is used as a filter on the HANA Large Instance side to allow or disallow traffic to the
HANA Large Instance units from Azure. If the BGP routing information of the Azure VNet and the IP address
ranges configured for filtering on the HANA Large Instance side do not match, issues in connectivity can arise.
There are some details about the gateway subnet that are discussed further down in the section 'Connecting a
VNet to HANA Large Instance ExpressRoute'.
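As a planning aid, the coverage rule summarized above can be checked programmatically before you submit anything to Microsoft. The following is a minimal sketch using Python's standard ipaddress module; the ranges are hypothetical example values matching this guide's illustrations, not assigned ranges:

```python
# Minimal sketch: verify that every planned subnet falls inside the
# VNet Address Space you submit to Microsoft. Example ranges only.
import ipaddress

# VNet Address Space as submitted (two specific ranges, not aggregated)
address_space = [ipaddress.ip_network("10.0.1.0/24"),
                 ipaddress.ip_network("10.0.2.0/28")]

# Planned subnets: one VM subnet plus the mandatory "GatewaySubnet"
subnets = {"default": ipaddress.ip_network("10.0.1.0/24"),
           "GatewaySubnet": ipaddress.ip_network("10.0.2.0/28")}

for name, net in subnets.items():
    covered = any(net.subnet_of(space) for space in address_space)
    print(f"{name}: {'covered' if covered else 'NOT covered'}")
```

If a subnet reports NOT covered, the BGP routes propagated for the VNet will not include it, and traffic from that subnet will be filtered on the HANA Large Instance side.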
Different IP address ranges to be defined
We already introduced some of the IP address ranges necessary to deploy HANA Large Instances in earlier
sections. But there are some more IP address ranges that are important. Let's go through some further details.
The following IP address ranges, of which not all need to be submitted to Microsoft, need to be defined before
sending a request for initial deployment:
VNet Address Space: As already introduced earlier, is or are the IP address range(s) you have assigned (or
plan to assign) to your address space parameter in the Azure Virtual Network(s) (VNet) connecting to the
SAP HANA Large Instance environment. It is recommended that this Address Space parameter is a multi-
line value comprised of the Azure VM Subnet range(s) and the Azure Gateway subnet range as shown in
the graphics earlier. This range must NOT overlap with your on-premises, Server IP Pool, or ER-P2P
address ranges. How to get this or these IP address range(s)? Your corporate network team or service
provider should provide one or multiple IP Address Range(s), which is or are not used inside your network.
Example: If your Azure VM Subnet (see earlier) is 10.0.1.0/24, and your Azure Gateway Subnet (see
following) is 10.0.2.0/28, then your Azure VNet Address Space is recommended to be two lines; 10.0.1.0/24
and 10.0.2.0/28. Although the Address Space values can be aggregated, it is recommended matching them
to the subnet ranges to avoid accidental reuse of unused IP address ranges within larger address spaces in
the future elsewhere in your network. The VNet Address Space is an IP address range that needs to
be submitted to Microsoft when asking for an initial deployment.
Azure VM subnet IP address range: This IP address range, as discussed earlier already, is the one you
have assigned (or plan to assign) to the Azure VNet subnet parameter in your Azure VNET connecting to
the SAP HANA Large Instance environment. This IP address range is used to assign IP addresses to your
Azure VMs. The IP addresses out of this range are allowed to connect to your SAP HANA Large Instance
server(s). If needed, multiple Azure VM subnets may be used. A /24 CIDR block is recommended by
Microsoft for each Azure VM Subnet. This address range must be a part of the values used in the Azure
VNet Address Space. How to get this IP address range? Your corporate network team or service provider
should provide an IP Address Range, which is not currently used inside your network.
VNet Gateway Subnet IP address range: Depending on the features you plan to use, the recommended
size would be:
Ultra-performance ExpressRoute gateway: /26 address block - required for Type II class of SKUs
Co-existence with VPN and ExpressRoute using a High-performance ExpressRoute Gateway (or smaller):
/27 address block
All other situations: /28 address block. This address range must be a part of the Azure VNet Address
Space values that you need to submit to Microsoft. How to get this IP address range? Your corporate
network team or service provider should provide an IP address range, which is not currently used
inside your network.
Address range for ER-P2P connectivity: This range is the IP range for your SAP HANA Large Instance
ExpressRoute (ER) P2P connection. This range of IP addresses must be a /29 CIDR IP address range. This
range must NOT overlap with your on-premises or other Azure IP address ranges. This IP address range is
used to set up the ER connectivity from your Azure VNet ExpressRoute gateway to the SAP HANA Large
Instance servers. How to get this IP address range? Your corporate network team or service provider should
provide an IP address range, which is not currently used inside your network. This range is an IP address
range that needs to be submitted to Microsoft when asking for an initial deployment.
Server IP Pool Address Range: This IP address range is used to assign the individual IP addresses to HANA
Large Instance servers. The recommended subnet size is a /24 CIDR block, but if needed it can be smaller,
down to a minimum of 64 IP addresses. From this range, the first 30 IP addresses are reserved for use
by Microsoft. Ensure this fact is accounted for when choosing the size of the range. This range must NOT
overlap with your on-premises or other Azure IP addresses. How to get this IP address range? Your
corporate network team or service provider should provide an unused, unique CIDR block (a /24 is
recommended) to be used for assigning the specific IP addresses needed for SAP HANA on Azure (Large
Instances). This range is an IP address range that needs to be submitted to Microsoft when asking for
an initial deployment.
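To translate the sizing guidance above into concrete numbers, here is a minimal sketch using Python's standard ipaddress module. The ranges are hypothetical example values, not assigned ones:

```python
# Minimal sketch: address counts for the ranges discussed above.
# Both ranges below are hypothetical examples.
import ipaddress

er_p2p = ipaddress.ip_network("192.168.100.0/29")    # ER-P2P must be a /29
server_pool = ipaddress.ip_network("10.250.0.0/24")  # recommended /24 pool

print(er_p2p.num_addresses)        # 8 addresses in a /29

reserved = 30                      # first 30 pool addresses reserved by Microsoft
print(server_pool.num_addresses - reserved)  # 226 addresses left for HANA units

# The pool may be smaller than a /24, but must provide at least 64 addresses
assert server_pool.num_addresses >= 64
```

Running the check while planning makes the 30 reserved addresses explicit, so a pool sized at the 64-address minimum is understood to leave only 34 assignable addresses.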
Though you need to define and plan the IP address ranges above, not all of them need to be transmitted to
Microsoft. To summarize, the IP address ranges you are required to name to Microsoft are:
Azure VNet Address Space(s)
Address range for ER-P2P connectivity
Server IP Pool Address Range
Adding additional VNets that need to connect to HANA Large Instances requires you to submit the new Azure
VNet Address Space you're adding to Microsoft.
Following is an example of the different ranges, with some example values, as you need to configure them and
eventually provide them to Microsoft. As you can see, the value for the Azure VNet Address Space is not
aggregated in the first example, but is defined from the ranges of the first Azure VM subnet IP address range
and the VNet Gateway Subnet IP address range. Using multiple VM subnets within the Azure VNet would work
accordingly, by configuring and submitting the additional IP address ranges of the additional VM subnet(s) as
part of the Azure VNet Address Space.

You also have the possibility of aggregating the data you submit to Microsoft. In that case, the Address Space of
the Azure VNet only includes one space. Using the IP address ranges from the example earlier, the aggregated
VNet Address Space could look like:

As you can see above, instead of the two smaller ranges that defined the address space of the Azure VNet, we have
one larger range that covers 4096 IP addresses. Such a large definition of the Address Space leaves some rather
large ranges unused. Since the VNet Address Space value(s) are used for BGP route propagation, usage of the
unused ranges on-premises or elsewhere in your network can cause routing issues. So, it's recommended to keep
the Address Space tightly aligned with the actual subnet address space used. If needed, you can always add new
Address Space values later without incurring downtime on the VNet.
IMPORTANT
Each IP address range of ER-P2P, Server IP Pool, and Azure VNet Address Space must NOT overlap with one another
or with any other range used elsewhere in your network; each must be discrete and, as the two graphics earlier
show, may not be a subnet of any other range. If overlaps occur between ranges, the Azure VNet may not connect to
the ExpressRoute circuit.
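The non-overlap requirement described in the note above can be validated before submission. A minimal sketch, again using Python's standard ipaddress module with hypothetical example values:

```python
# Minimal sketch: pairwise overlap check for the ranges that must be discrete.
# All three ranges are hypothetical example values.
import ipaddress
from itertools import combinations

ranges = {
    "ER-P2P": ipaddress.ip_network("192.168.100.0/29"),
    "Server IP Pool": ipaddress.ip_network("10.250.0.0/24"),
    "VNet Address Space": ipaddress.ip_network("10.0.0.0/22"),
}

for (a, net_a), (b, net_b) in combinations(ranges.items(), 2):
    if net_a.overlaps(net_b):
        raise ValueError(f"{a} overlaps {b}: the VNet may not connect "
                         f"to the ExpressRoute circuit")
print("no overlaps found")
```

Note that `overlaps` also catches the subnet case: a range that is a subnet of another range overlaps it, so this single check covers both failure modes mentioned in the note.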

Next steps after address ranges have been defined


After the IP address ranges have been defined, the following activities need to happen:
1. Submit the IP address ranges for Azure VNet Address Space, the ER-P2P connectivity, and Server IP Pool
Address Range, together with other data that has been listed at the beginning of the document. At this point in
time, you also could start to create the VNet and the VM Subnets.
2. An ExpressRoute circuit is created by Microsoft between your Azure subscription and the HANA Large Instance
stamp.
3. A tenant network is created on the Large Instance stamp by Microsoft.
4. Microsoft configures networking in the SAP HANA on Azure (Large Instances) infrastructure to accept IP
addresses from your Azure VNet Address Space that communicates with HANA Large Instances.
5. Depending on the specific SAP HANA on Azure (Large Instances) SKU purchased, Microsoft assigns a compute
unit in a tenant network, allocates and mounts storage, and installs the operating system (SUSE or Red Hat Linux).
IP addresses for these units are taken out of the Server IP Pool address range you submitted to Microsoft.
At the end of the deployment process, Microsoft delivers the following data to you:
Information needed to connect your Azure VNet(s) to the ExpressRoute circuit that connects Azure VNets to
HANA Large Instances:
Authorization key(s)
ExpressRoute PeerID
Data to access HANA Large Instances after you have established the ExpressRoute circuit and Azure VNet connection.
You can also find the sequence of connecting HANA Large Instances in the document End to End Setup for SAP
HANA Large Instances. Many of the following steps are shown in an example deployment in that document.

Connecting a VNet to HANA Large Instance ExpressRoute


As you have defined all the IP address ranges and have received the data back from Microsoft, you can start
connecting the VNet you created before to HANA Large Instances. Once the Azure VNet is created, an ExpressRoute
gateway must be created on the VNet to link the VNet to the ExpressRoute circuit that connects to the customer
tenant on the Large Instance stamp.

NOTE
This step can take up to 30 minutes to complete, as the new gateway is created in the designated Azure subscription and
then connected to the specified Azure VNet.

If a gateway already exists, check whether it is an ExpressRoute gateway or not. If not, the gateway must be
deleted and recreated as an ExpressRoute gateway. If an ExpressRoute gateway is already established, since the
Azure VNet is already connected to the ExpressRoute circuit for on-premises connectivity, proceed to the Linking
VNets section below.
Use either the (new) Azure portal, or PowerShell to create an ExpressRoute VPN gateway connected to your
VNet.
If you use the Azure portal, add a new Virtual Network Gateway and then select ExpressRoute as the
gateway type.
If you chose PowerShell instead, first download and use the latest Azure PowerShell SDK to ensure an
optimal experience. The following commands create an ExpressRoute gateway. The texts preceded by a
$ are user-defined variables that need to be updated with your specific information.

# These values should already exist; update to match your environment
$myAzureRegion = "eastus"
$myGroupName = "SAP-East-Coast"
$myVNetName = "VNet01"

# These values are used to create the gateway; update for how you wish the GW components to be named
$myGWName = "VNet01GW"
$myGWConfig = "VNet01GWConfig"
$myGWPIPName = "VNet01GWPIP"
$myGWSku = "HighPerformance" # Supported values for HANA Large Instances are: HighPerformance or UltraPerformance

# These commands create the public IP and the ExpressRoute gateway
$vnet = Get-AzureRmVirtualNetwork -Name $myVNetName -ResourceGroupName $myGroupName
$subnet = Get-AzureRmVirtualNetworkSubnetConfig -Name 'GatewaySubnet' -VirtualNetwork $vnet
New-AzureRmPublicIpAddress -Name $myGWPIPName -ResourceGroupName $myGroupName `
    -Location $myAzureRegion -AllocationMethod Dynamic
$gwpip = Get-AzureRmPublicIpAddress -Name $myGWPIPName -ResourceGroupName $myGroupName
$gwipconfig = New-AzureRmVirtualNetworkGatewayIpConfig -Name $myGWConfig -SubnetId $subnet.Id `
    -PublicIpAddressId $gwpip.Id
New-AzureRmVirtualNetworkGateway -Name $myGWName -ResourceGroupName $myGroupName -Location $myAzureRegion `
    -IpConfigurations $gwipconfig -GatewayType ExpressRoute `
    -GatewaySku $myGWSku -VpnType PolicyBased -EnableBgp $true

In this example, the HighPerformance gateway SKU was used. Your options are HighPerformance or
UltraPerformance as the only gateway SKUs that are supported for SAP HANA on Azure (Large Instances).

IMPORTANT
For HANA Large Instances of the SKU types S384, S384m, S384xm, S576, S768, and S960 (Type II class SKUs), the usage of
the UltraPerformance Gateway SKU is mandatory.

Linking VNets
Now that the Azure VNet has an ExpressRoute gateway, you use the authorization information provided by
Microsoft to connect the ExpressRoute gateway to the SAP HANA on Azure (Large Instances) ExpressRoute circuit
created for this connectivity. This step can be performed using the Azure portal or PowerShell. The portal is
recommended; however, PowerShell instructions are as follows.
Execute the following commands for each VNet gateway, using a different AuthGUID for each connection.
The first two entries shown in the following script come from the information provided by Microsoft. Also, the
AuthGUID is specific to every VNet and its gateway. This means that if you want to add another Azure VNet, you
need to get another AuthGUID for your ExpressRoute circuit that connects HANA Large Instances into Azure.
# Populate with information provided by the Microsoft onboarding team
$PeerID = "/subscriptions/9cb43037-9195-4420-a798-f87681a0e380/resourceGroups/Customer-USE-Circuits/providers/Microsoft.Network/expressRouteCircuits/Customer-USE01"
$AuthGUID = "76d40466-c458-4d14-adcf-3d1b56d1cd61"

# Your ExpressRoute gateway information
$myGroupName = "SAP-East-Coast"
$myGWName = "VNet01GW"
$myGWLocation = "East US"

# Define the name for your connection
$myConnectionName = "VNet01GWConnection"

# Create a new connection between the ER circuit and your gateway using the authorization
$gw = Get-AzureRmVirtualNetworkGateway -Name $myGWName -ResourceGroupName $myGroupName
New-AzureRmVirtualNetworkGatewayConnection -Name $myConnectionName `
    -ResourceGroupName $myGroupName -Location $myGWLocation -VirtualNetworkGateway1 $gw `
    -PeerId $PeerID -ConnectionType ExpressRoute -AuthorizationKey $AuthGUID

If you want to connect the gateway to multiple ExpressRoute circuits that are associated with your subscription,
you may need to execute this step more than once. For example, you are likely going to connect the same VNet
gateway to the ExpressRoute circuit that connects the VNet to your on-premises network.

Adding more IP addresses or subnets


Use either the Azure portal, PowerShell, or CLI when adding more IP addresses or subnets.
In this case, the recommendation is to add the new IP address range as a new range to the VNet Address Space
instead of generating a new aggregated range. In either case, you need to submit this change to Microsoft to allow
connectivity out of that new IP address range to the HANA Large Instance units in your tenant. You can open an
Azure support request to get the new VNet Address Space added. After you receive confirmation, perform the next
steps.
To create an additional subnet from the Azure portal, see the article Create a virtual network using the Azure
portal, and to create from PowerShell, see Create a virtual network using PowerShell.

Adding VNets
After initially connecting one or more Azure VNets, you might want to add additional ones that access SAP HANA
on Azure (Large Instances). First, submit an Azure support request; in that request, include both the specific
information identifying the particular Azure deployment and the IP address space range(s) of the Azure VNet
Address Space. SAP HANA on Azure Service Management then provides the necessary information you need to
connect the additional VNets and ExpressRoute. For every VNet, you need a unique Authorization Key to connect
to the ExpressRoute Circuit to HANA Large Instances.
Steps to add a new Azure VNet:
1. Complete the first step in the onboarding process, see the Creating Azure VNet section.
2. Complete the second step in the onboarding process, see the Creating gateway subnet section.
3. To connect your additional VNets to the HANA Large Instance ExpressRoute circuit, open an Azure support
request with information on the new VNet and request a new Authorization Key.
4. Once notified that the authorization is complete, use the Microsoft-provided authorization information to
complete the third step in the onboarding process, see the Linking VNets section.

Increasing ExpressRoute circuit bandwidth


Consult with SAP HANA on Azure Service Management. If you are advised to increase the bandwidth of the SAP
HANA on Azure (Large Instances) ExpressRoute circuit, create an Azure support request. (You can request an
increase of a single circuit's bandwidth up to a maximum of 10 Gbps.) You then receive a notification after the
operation is complete; no additional action is needed to enable the higher speed in Azure.

Adding an additional ExpressRoute circuit


Consult with SAP HANA on Azure Service Management. If you are advised that an additional ExpressRoute circuit
is needed, make an Azure support request to create a new ExpressRoute circuit (and to get the authorization
information to connect to it). The address space that is going to be used on the VNets must be defined before
making the request, in order for SAP HANA on Azure Service Management to provide authorization.
Once the new circuit is created and the SAP HANA on Azure Service Management configuration is complete, you
receive a notification with the information you need to proceed. Follow the steps provided above for creating and
connecting the new VNet to this additional circuit. You are not able to connect Azure VNets to this additional
circuit if they are already connected to another SAP HANA on Azure (Large Instance) ExpressRoute circuit in the
same Azure region.

Deleting a subnet
To remove a VNet subnet, either the Azure portal, PowerShell, or CLI can be used. If your Azure VNet IP
address range/Azure VNet Address Space was an aggregated range, there is no follow-up with Microsoft; note,
however, that the VNet still propagates BGP route address space that includes the deleted subnet. If you defined
the Azure VNet IP address range/Azure VNet Address Space as multiple IP address ranges, of which one was
assigned to your deleted subnet, you should delete that range from your VNet Address Space and subsequently
inform SAP HANA on Azure Service Management to remove it from the ranges that SAP HANA on Azure (Large
Instances) is allowed to communicate with.
While there isn't yet specific, dedicated Azure.com guidance on removing subnets, the process for removing
subnets is the reverse of the process for adding them. See the article Create a virtual network using the Azure
portal for more information on creating subnets.

Deleting a VNet
Use either the Azure portal, PowerShell, or CLI when deleting a VNet. SAP HANA on Azure Service Management
removes the existing authorizations on the SAP HANA on Azure (Large Instances) ExpressRoute circuit and
removes the Azure VNet IP address range/Azure VNet Address Space for the communication with HANA Large
Instances.
Once the VNet has been removed, open an Azure support request to provide the IP address space range(s) to be
removed.
While there isn't yet specific, dedicated Azure.com guidance on removing VNets, the process for removing VNets
is the reverse of the process for adding them, which is described above. See the articles Create a virtual network
using the Azure portal and Create a virtual network using PowerShell for more information on creating VNets.
To ensure everything is removed, delete the following items: the ExpressRoute connection, the VNet Gateway, the
VNet Gateway Public IP, and the VNet.

Deleting an ExpressRoute circuit


To remove an additional SAP HANA on Azure (Large Instances) ExpressRoute circuit, open an Azure support
request with SAP HANA on Azure Service Management and request that the circuit be deleted. Within the Azure
subscription, you may delete or keep the VNet as necessary. However, you must delete the connection between
the HANA Large Instances ExpressRoute circuit and the linked VNet gateway.
If you also want to remove a VNet, follow the guidance on Deleting a VNet in the section above.
How to install and configure SAP HANA (large
instances) on Azure
7/20/2017 24 min to read

Following are some important definitions to know before you read this guide. In SAP HANA (large instances)
overview and architecture on Azure we introduced two different classes of HANA Large Instance units with:
S72, S72m, S144, S144m, S192, and S192m, which we refer to as the 'Type I class' of SKUs.
S384, S384m, S384xm, S576, S768, and S960, which we refer to as the 'Type II class' of SKUs.
The class specifier is used throughout the HANA Large Instance documentation to refer to different capabilities
and requirements based on HANA Large Instance SKUs.
Other definitions we use frequently are:
Large Instance stamp: A hardware infrastructure stack that is SAP HANA TDI certified and dedicated to run
SAP HANA instances within Azure.
SAP HANA on Azure (Large Instances): Official name for the offer in Azure to run HANA instances on SAP
HANA TDI-certified hardware that is deployed in Large Instance stamps in different Azure regions. The related
term HANA Large Instance is short for SAP HANA on Azure (Large Instances) and is widely used in this technical
deployment guide.
The installation of SAP HANA is your responsibility. You can start the activity after handoff of a new SAP HANA
on Azure (Large Instances) server, and after the connectivity between your Azure VNet(s) and the HANA Large
Instance unit(s) has been established.

NOTE
Per SAP policy, the installation of SAP HANA must be performed either by a person who has passed the Certified
SAP Technology Associate exam (SAP HANA Installation certification exam), or by an SAP-certified system
integrator (SI).

Especially when planning to install HANA 2.0, check SAP Support Note #2235581 - SAP HANA: Supported
Operating Systems again, in order to make sure that the OS is supported with the SAP HANA release you decided
to install. Note that the set of supported operating systems for HANA 2.0 is more restricted than the set
supported for HANA 1.0.

First steps after receiving the HANA Large Instance Unit(s)


The first step after receiving the HANA Large Instance, and after establishing access and connectivity to the
instances, is to register the OS of the instance with your OS provider. For SUSE Linux, this means registering the
OS with an instance of SUSE SMT that you need to have deployed in a VM in Azure, so that the HANA Large
Instance unit can connect to this SMT instance (see later in this documentation). For Red Hat, the OS needs to be
registered with the Red Hat Subscription Manager you need to connect to. See also the remarks in this document.
This step is also necessary to be able to patch the OS, a task that is the responsibility of the customer. For SUSE,
find documentation to install and configure SMT here.
The second step is to check for new patches and fixes of the specific OS release/version, and whether the patch
level of the HANA Large Instance is up to date. Based on the timing of OS patches/releases and the changes to
the image Microsoft can deploy, there might be cases where the latest patches are not included. Hence, after
taking over a HANA Large Instance unit, it is a mandatory step to check whether patches relevant for security,
functionality, availability, and performance have meanwhile been released by the particular Linux vendor and
need to be applied.
The third step is to check the relevant SAP Notes for installing and configuring SAP HANA on the specific OS
release/version. Due to changing recommendations, or changes to SAP Notes or configurations that depend on
individual installation scenarios, Microsoft is not always able to have a HANA Large Instance unit configured
perfectly. Hence, it is mandatory for you as a customer to read the SAP Notes related to SAP HANA on your exact
Linux release. Also check the configuration settings necessary for your OS release/version and apply the
configuration settings where not done already.
Specifically, check the following parameters and adjust them if necessary to:
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.core.rmem_default = 16777216
net.core.wmem_default = 16777216
net.core.optmem_max = 16777216
net.ipv4.tcp_rmem = 65536 16777216 16777216
net.ipv4.tcp_wmem = 65536 16777216 16777216
Starting with SLES12 SP1 and RHEL 7.2, these parameters must be set in a configuration file in the /etc/sysctl.d
directory. For example, a configuration file with the name 91-NetApp-HANA.conf must be created. For older SLES
and RHEL releases, these parameters must be set in /etc/sysctl.conf.
For all RHEL releases, and starting with SLES12, the
sunrpc.tcp_slot_table_entries = 128
parameter must be set in /etc/modprobe.d/sunrpc-local.conf. If the file does not exist, it must first be created by
adding the following entry:
options sunrpc tcp_max_slot_table_entries=128
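The steps above can be sketched as a small script that generates both files locally; the file names follow the examples in the text, and the parameter values mirror the list above (verify them against the current SAP Notes for your OS release before applying):

```shell
# Sketch: generate the two configuration files described above locally,
# then copy them to /etc/sysctl.d/ and /etc/modprobe.d/ as root.
cat > 91-NetApp-HANA.conf <<'EOF'
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.core.rmem_default = 16777216
net.core.wmem_default = 16777216
net.core.optmem_max = 16777216
net.ipv4.tcp_rmem = 65536 16777216 16777216
net.ipv4.tcp_wmem = 65536 16777216 16777216
EOF

cat > sunrpc-local.conf <<'EOF'
options sunrpc tcp_max_slot_table_entries=128
EOF

# As root on the HANA Large Instance unit:
# cp 91-NetApp-HANA.conf /etc/sysctl.d/ && sysctl --system
# cp sunrpc-local.conf /etc/modprobe.d/
```

On older SLES and RHEL releases, append the same parameter lines to /etc/sysctl.conf instead of using the drop-in directory.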
The fourth step is to check the system time of your HANA Large Instance unit. The instances are deployed with a
system time zone that represents the location of the Azure region the HANA Large Instance stamp is located in.
You are free to change the system time or time zone of the instances you own. If you do so and then order more
instances into your tenant, be prepared to adapt the time zone of the newly delivered instances. Microsoft
operations has no insight into the system time zone you set up for the instances after the handover, so newly
deployed instances might not be set to the same time zone as the one you changed to. As a result, it is your
responsibility as a customer to check and, if necessary, adapt the time zone of the instance(s) handed over.
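A quick way to inspect, and if needed adjust, the clock on a handed-over unit could look like the following; the time zone name is a placeholder, and the timedatectl commands assume the systemd-based SLES 12 / RHEL 7 releases used here:

```shell
# Show the current local time and compare it against UTC.
date
date -u
# On systemd-based systems (SLES 12, RHEL 7):
# timedatectl                              # show configured zone and NTP status
# timedatectl set-timezone Europe/Berlin   # placeholder zone; adjust as needed
```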
The fifth step is to check /etc/hosts. As the blades get handed over, they have different IP addresses assigned for
different purposes (see the next section). Check the /etc/hosts file. When units are added into an existing tenant,
don't expect the /etc/hosts of the newly deployed systems to be maintained correctly with the IP addresses of the
earlier delivered systems. Hence, it is up to you as a customer to check the settings so that a newly deployed
instance can interact with and resolve the names of earlier deployed units in your tenant.

Networking
We assume that you followed the recommendations in designing your Azure VNets and connecting those VNets to
the HANA Large Instances as described in these documents:
SAP HANA (large Instance) Overview and Architecture on Azure
SAP HANA (large instances) Infrastructure and connectivity on Azure
There are some details worth mentioning about the networking of the single units. Every HANA Large Instance
unit comes with two or three IP addresses that are assigned to two or three NIC ports of the unit. Three IP
addresses are used in HANA scale-out configurations and the HANA System Replication scenario. One of the IP
addresses assigned to the NIC of the unit is out of the Server IP pool that was described in SAP HANA (large
Instance) Overview and Architecture on Azure.
The distribution for units with two IP addresses assigned should look like this:
eth0.xx should have an IP address assigned that is out of the Server IP Pool address range that you submitted
to Microsoft. This IP address is the one to maintain in /etc/hosts of the OS.
eth1.xx should have an IP address assigned that is used for communication to NFS. Therefore, these addresses
do NOT need to be maintained in /etc/hosts in order to allow instance-to-instance traffic within the tenant.
For deployment cases of HANA System Replication or HANA scale-out, a blade configuration with only two IP
addresses assigned is not suitable. If you have only two IP addresses assigned and want to deploy such a
configuration, contact SAP HANA on Azure Service Management to get a third IP address in a third VLAN
assigned. For HANA Large Instance units with three IP addresses assigned on three NIC ports, the following usage
rules apply:
eth0.xx should have an IP address assigned that is out of the Server IP Pool address range that you submitted
to Microsoft. Hence, this IP address should not be maintained in /etc/hosts of the OS.
eth1.xx should have an IP address assigned that is used for communication to NFS storage. Hence, this type of
address should not be maintained in /etc/hosts.
eth2.xx should be exclusively used and maintained in /etc/hosts for communication between the different
instances. These addresses are also the IP addresses that need to be maintained in scale-out HANA
configurations as the IP addresses HANA uses for the inter-node configuration.
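As an illustration of the eth2.xx rule above, a fragment of /etc/hosts entries for a hypothetical three-node scale-out tenant could be prepared like this; all host names and 10.23.x.x addresses are invented placeholders, not values Microsoft assigns:

```shell
# Build a sample /etc/hosts fragment for the eth2.xx inter-node addresses.
# All names and addresses below are placeholders for illustration only.
cat > hosts.fragment <<'EOF'
10.23.1.4   hana-node1
10.23.1.5   hana-node2
10.23.1.6   hana-node3
EOF
# Append the fragment to /etc/hosts on every unit of the tenant (as root):
# cat hosts.fragment >> /etc/hosts
```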

Storage
The storage layout for SAP HANA on Azure (Large Instances) is configured by SAP HANA on Azure Service
Management following SAP-recommended guidelines, as documented in the SAP HANA Storage Requirements
white paper. The rough sizes of the different volumes for the different HANA Large Instance SKUs are documented
in SAP HANA (large Instance) Overview and Architecture on Azure.
The naming conventions of the storage volumes are listed in the following table:

STORAGE USAGE        MOUNT NAME                    VOLUME NAME
HANA data            /hana/data/SID/mnt00001       Storage IP:/hana_data_SID_mnt00001_tenant_vol
HANA log             /hana/log/SID/mnt00001        Storage IP:/hana_log_SID_mnt00001_tenant_vol
HANA log backup      /hana/log/backups             Storage IP:/hana_log_backups_SID_mnt00001_tenant_vol
HANA shared          /hana/shared/SID              Storage IP:/hana_shared_SID_mnt00001_tenant_vol/shared
usr/sap              /usr/sap/SID                  Storage IP:/hana_shared_SID_mnt00001_tenant_vol/usr_sap

Where SID = the HANA instance System ID, and tenant = an internal enumeration of operations when deploying a
tenant.
As you can see, HANA shared and usr/sap share the same volume. The nomenclature of the mount points includes
the System ID of the HANA instances as well as the mount number. In scale-up deployments, there is only one
mount, like mnt00001. In scale-out deployments, you see as many mounts as you have worker and master nodes.
For the scale-out environment, data, log, and log backup volumes are shared and attached to each node in the
scale-out configuration. For configurations running multiple SAP instances, a different set of volumes is created
and attached to the HANA Large Instance unit.
As you read the paper and look at a HANA Large Instance unit, you realize that the units come with a rather
generous disk volume for HANA/data, and that there is a volume HANA/log/backup. The reason HANA/data is
sized so large is that the storage snapshots offered to you as a customer use the same disk volume. That means
the more storage snapshots you perform, the more space is consumed by snapshots in your assigned storage
volumes. The HANA/log/backup volume is not intended as the volume to put database backups in. It is sized to be
used as the backup volume for the HANA transaction log backups. In future versions of the storage snapshot
self-service, we will target this specific volume for more frequent snapshots, and with that, more frequent
replications to the disaster recovery site if you opt in for the disaster recovery functionality provided by the HANA
Large Instance infrastructure. See details in SAP HANA (large instances) High Availability and Disaster Recovery
on Azure.
In addition to the storage provided, you can purchase additional storage capacity in 1 TB increments. This
additional storage can be added as new volumes to a HANA Large Instance.
During onboarding with SAP HANA on Azure Service Management, the customer specifies a User ID (UID) and
Group ID (GID) for the sidadm user and sapsys group (for example: 1000,500). It is necessary that these same
values are used during the installation of the SAP HANA system. If you want to deploy multiple HANA instances
on a unit, you get multiple sets of volumes (one set for each instance). As a result, at deployment time you need
to define:
The SID of the different HANA instances (sidadm is derived out of it).
The memory sizes of the different HANA instances, since the memory size per instance defines the size of the
volumes in each individual volume set.
Based on storage provider recommendations the following mount options are configured for all mounted volumes
(excludes boot LUN):
nfs rw,vers=4,hard,timeo=600,rsize=1048576,wsize=1048576,intr,noatime,lock 0 0
These mount points are configured in /etc/fstab like shown in the following graphics:
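A single hypothetical /etc/fstab entry following the naming convention and mount options above might look like this; the storage IP 172.24.4.4, the tenant enumeration t020, and the SID H11 are invented placeholders:

```shell
# Print a sample /etc/fstab line for the HANA data volume.
# IP address, tenant number, and SID are placeholders, not real assignments.
cat > fstab.sample <<'EOF'
172.24.4.4:/hana_data_H11_mnt00001_t020_vol /hana/data/H11/mnt00001 nfs rw,vers=4,hard,timeo=600,rsize=1048576,wsize=1048576,intr,noatime,lock 0 0
EOF
cat fstab.sample
```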

The output of the command df -h on a S72m HANA Large Instance unit would look like:
The storage controller and nodes in the Large Instance stamps are synchronized to NTP servers. If you
synchronize the SAP HANA on Azure (Large Instances) units and Azure VMs against an NTP server, there should
be no significant time drift between the infrastructure and the compute units in Azure or the Large Instance
stamps.
In order to optimize SAP HANA to the storage used underneath, you should also set the following SAP HANA
configuration parameters:
max_parallel_io_requests 128
async_read_submit on
async_write_submit_active on
async_write_submit_blocks all
For SAP HANA 1.0 versions up to SPS12, these parameters can be set during the installation of the SAP HANA
database, as described in SAP Note #2267798 - Configuration of the SAP HANA Database
You also can configure the parameters after the SAP HANA database installation by using the hdbparam
framework.
With SAP HANA 2.0, the hdbparam framework has been deprecated. As a result the parameters must be set using
SQL commands. For details, see SAP Note #2399079: Elimination of hdbparam in HANA 2.
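As a sketch of the SQL route for HANA 2.0, the four statements for the parameters above can be generated and then executed with hdbsql. The section and parameter names in the global.ini [fileio] section follow SAP Note #2399079, and the instance number, user, and password in the commented hdbsql call are placeholders; verify the exact names for your HANA revision:

```shell
# Build the ALTER SYSTEM statements for the four fileio parameters above.
# Section/parameter names per SAP Note #2399079; verify for your revision.
: > set_fileio.sql
for p in "max_parallel_io_requests=128" "async_read_submit=on" \
         "async_write_submit_active=on" "async_write_submit_blocks=all"; do
  key=${p%%=*}; val=${p#*=}
  echo "ALTER SYSTEM ALTER CONFIGURATION ('global.ini','SYSTEM') SET ('fileio','$key') = '$val' WITH RECONFIGURE;" >> set_fileio.sql
done
cat set_fileio.sql
# Execute against the instance (placeholder instance number and credentials):
# hdbsql -i 00 -u SYSTEM -p '<password>' -I set_fileio.sql
```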

Operating system
Swap space of the delivered OS image is set to 2 GB, according to SAP Support Note #1999997 - FAQ: SAP
HANA Memory. Any different setting needs to be applied by you as a customer.
SUSE Linux Enterprise Server 12 SP1 for SAP Applications is the distribution of Linux installed for SAP HANA on
Azure (Large Instances). This particular distribution provides SAP-specific capabilities "out of the box" (including
pre-set parameters for running SAP on SLES effectively).
See Resource Library/White Papers on the SUSE website and SAP on SUSE on the SAP Community Network (SCN)
for several useful resources related to deploying SAP HANA on SLES (including the set-up of High Availability,
security hardening specific to SAP operations, and more).
Additional and useful SAP on SUSE-related links:
SAP HANA on SUSE Linux Site
Best Practice for SAP: Enqueue Replication SAP NetWeaver on SUSE Linux Enterprise 12.
ClamSAP SLES Virus Protection for SAP (including SLES 12 for SAP Applications).
SAP Support Notes applicable to implementing SAP HANA on SLES 12:
SAP Support Note #1944799 SAP HANA Guidelines for SLES Operating System Installation.
SAP Support Note #2205917 SAP HANA DB Recommended OS Settings for SLES 12 for SAP Applications.
SAP Support Note #1984787 SUSE Linux Enterprise Server 12: Installation Notes.
SAP Support Note #171356 SAP Software on Linux: General Information.
SAP Support Note #1391070 Linux UUID Solutions.
Red Hat Enterprise Linux for SAP HANA is another offer for running SAP HANA on HANA Large Instances. Releases
of RHEL 6.7 and 7.2 are available.
Additional and useful SAP on Red Hat related links:
SAP HANA on Red Hat Linux Site.
SAP Support Notes applicable to implementing SAP HANA on Red Hat:
SAP Support Note #2009879 - SAP HANA Guidelines for Red Hat Enterprise Linux (RHEL) Operating System.
SAP Support Note #2292690 - SAP HANA DB: Recommended OS settings for RHEL 7.
SAP Support Note #2247020 - SAP HANA DB: Recommended OS settings for RHEL 6.7.
SAP Support Note #1391070 Linux UUID Solutions.
SAP Support Note #2228351 - Linux: SAP HANA Database SPS 11 revision 110 (or higher) on RHEL 6 or SLES
11.
SAP Support Note #2397039 - FAQ: SAP on RHEL.
SAP Support Note #1496410 - Red Hat Enterprise Linux 6.x: Installation and Upgrade.
SAP Support Note #2002167 - Red Hat Enterprise Linux 7.x: Installation and Upgrade.

Time synchronization
SAP applications built on the SAP NetWeaver architecture are sensitive to time differences among the various
components that comprise the SAP system. SAP ABAP short dumps with the error title of
ZDATE_LARGE_TIME_DIFF are likely familiar, as these short dumps appear when the system time of different
servers or VMs drifts too far apart.
For SAP HANA on Azure (Large Instances), the time synchronization done in Azure doesn't apply to the compute
units in the Large Instance stamps. This issue does not arise for SAP applications running in native Azure VMs,
because Azure ensures a system's time is properly synchronized. As a result, a separate time server must be set up
that can be used by the SAP application servers running on Azure VMs and by the SAP HANA database instances
running on HANA Large Instances. The storage infrastructure in Large Instance stamps is time-synchronized with
NTP servers.

Setting up SMT server for SUSE Linux


SAP HANA Large Instances don't have direct connectivity to the internet. Hence, it is not a straightforward process
to register such a unit with the OS provider and to download and apply patches. In the case of SUSE Linux, one
solution is to set up an SMT server in an Azure VM. The Azure VM needs to be hosted in an Azure VNet that is
connected to the HANA Large Instance. With such an SMT server, the HANA Large Instance unit can register and
download patches.
SUSE provides a detailed guide on their Subscription Management Tool for SLES 12 SP2.
As a precondition for the installation of an SMT server that fulfills this task for HANA Large Instances, you need:
An Azure VNet that is connected to the HANA Large Instance ER circuit.
A SUSE account that is associated with an organization that has a valid SUSE subscription.
Installation of SMT server on Azure VM
In this step, you install the SMT server in an Azure VM. First, log in to the SUSE Customer Center. Once logged in,
go to Organization > Organization Credentials. In that section, you should find the credentials that are necessary
to set up the SMT server.
Next, install a SUSE Linux VM in the Azure VNet. To deploy the VM, take a SLES 12 SP2 gallery image of Azure. In
the deployment process, don't define a DNS name, and do not use static IP addresses, as seen in this screenshot.
In this example, the deployed VM was a smaller VM that got the internal IP address 10.34.1.4 in the Azure VNet;
the name of the VM was smtserver. After the installation, the connectivity to the HANA Large Instance unit(s) was
checked. Depending on how you organized name resolution, you might need to configure resolution of the HANA
Large Instance units in /etc/hosts of the Azure VM. Add an additional disk to the VM that is going to be used to
hold the patches, as the boot disk itself could be too small. In the case demonstrated, the disk got mounted to
/srv/www/htdocs, as shown in the following screenshot. A 100 GB disk should suffice.

Log in to the HANA Large Instance unit(s), maintain /etc/hosts, and check whether you can reach the Azure VM
that is supposed to run the SMT server over the network.
After this check is done successfully, log in to the Azure VM that should run the SMT server. If you are using
PuTTY to log in to the VM, execute this sequence of commands in your bash window:

cd ~
echo "export NCURSES_NO_UTF8_ACS=1" >> .bashrc

After executing these commands, restart your bash to activate the settings. Then start YAST.
In YAST, go to Software Maintenance and search for smt. Select smt, which switches automatically to yast2-smt as
shown below
Accept the selection for installation on the smtserver. Once installed, go to the SMT server configuration and enter
the organizational credentials from the SUSE Customer Center you retrieved earlier. Also enter your Azure VM
hostname as the SMT Server URL. In this demonstration, it was https://smtserver as displayed in the next graphics.

As the next step, test whether the connection to the SUSE Customer Center works. As you see in the following
graphics, in the demonstration case it did.
Once the SMT setup starts, you need to provide a database password. Since it is a new installation, you need to
define that password, as shown in the next graphics.

The next interaction is when a certificate gets created. Go through the dialog as shown next, and the step should
proceed. The 'Run synchronization check' step at the end of the configuration might take some minutes.
After the installation and configuration of the SMT server, you should find the directory repo under the mount
point /srv/www/htdocs/, plus some subdirectories under repo.
Restart the SMT server and its related services with these commands.

rcsmt restart
systemctl restart smt.service
systemctl restart apache2

Download of packages onto SMT server


After all the services are restarted, select the appropriate packages in SMT Management using YaST. The package
selection depends on the OS image of the HANA Large Instance server, and not on the SLES release or version of
the VM running the SMT server. An example of the selection screen is shown below.

Once you are finished with the package selection, you need to start the initial copy of the selected packages to the
SMT server you set up. This copy is triggered in the shell using the command smt-mirror, as shown below.
As you see above, the packages are copied into the directories created under the mount point /srv/www/htdocs.
This process can take a while; depending on how many packages you select, it could take up to an hour or more.
As this process finishes, you need to move on to the SMT client setup.
Set up the SMT client on HANA Large Instance units
The clients in this case are the HANA Large Instance units. The SMT server setup copied the script
clientSetup4SMT.sh to the Azure VM. Copy that script over to the HANA Large Instance unit you want to connect
to your SMT server. Start the script with the -h option, and pass the name of your SMT server as the parameter; in
this example, smtserver.

There might be a scenario where the load of the certificate from the server by the client succeeded, but the
registration failed as shown below.
If the registration failed, read this SUSE support document and execute the steps described there.

IMPORTANT
As the server name, you need to provide the plain name of the VM, in this case smtserver, without the fully
qualified domain name. Just the VM name works.

After these steps have been executed, you need to execute the following command on the HANA Large Instance
unit

SUSEConnect --cleanup

NOTE
In our tests, we always had to wait a few minutes after that step. An immediate execution of clientSetup4SMT.sh
after the corrective measures described in the SUSE article ended with messages that the certificate was not yet
valid. Waiting 5-10 minutes and then executing clientSetup4SMT.sh usually ended in a successful client
configuration.

If you ran into the issue that you needed to fix based on the steps of the SUSE article, restart clientSetup4SMT.sh
on the HANA Large Instance unit. It should now finish successfully, as shown below.
With this step, you have configured the SMT client of the HANA Large Instance unit to connect to the SMT server
you installed in the Azure VM. You can now use 'zypper up' or 'zypper in' to install OS patches or additional
packages on the HANA Large Instance. Keep in mind that you can only get patches that you downloaded to the
SMT server beforehand.

Example of an SAP HANA installation on HANA Large Instances


This section illustrates how to install SAP HANA on a HANA Large Instance unit. The starting state looks like this:
You provided Microsoft all the data to deploy an SAP HANA Large Instance for you.
You received the SAP HANA Large Instance from Microsoft.
You created an Azure VNet that is connected to your on-premises network.
You connected the ExpressRoute circuit for HANA Large Instances to the same Azure VNet.
You installed an Azure VM that you use as a jump box for HANA Large Instances.
You made sure that you can connect from the jump box to your HANA Large Instance unit and vice versa.
You checked whether all the necessary packages and patches are installed.
You read the SAP Notes and documentation regarding the HANA installation on the OS you are using, and
made sure that the HANA release of choice is supported on the OS release.
The next sequences show the download of the HANA installation packages to the jump box VM, in this case
running a Windows OS, the copying of the packages to the HANA Large Instance unit, and the sequence of the
setup.
Download of the SAP HANA installation bits
Since the HANA Large Instance units don't have direct connectivity to the internet, you can't directly download the
installation packages from SAP to the HANA Large Instance unit. To overcome the missing direct internet
connectivity, you need the jump box: you download the packages to the jump box VM.
In order to download the HANA installation packages, you need an SAP S-user or another user that allows you to
access the SAP Service Marketplace. Go through this sequence of screens after logging in:
Go to SAP Service Marketplace > Download Software > Installations and Upgrade > By Alphabetical Index >
Under H, SAP HANA Platform Edition > SAP HANA Platform Edition 2.0 > Installation > Download the following
files

In the demonstration case, we downloaded SAP HANA 2.0 installation packages. On the Azure jump box VM, you
expand the self-extracting archives into a directory, as shown below.

As the archives are extracted, copy the directory created by the extraction, in the case above 51052030, to the
HANA Large Instance unit, into a directory you created within the /hana/shared volume.

IMPORTANT
Do not copy the installation packages onto the root or boot LUN, because space there is limited and is needed by
other processes as well.

Install SAP HANA on the HANA Large Instance unit


In order to install SAP HANA, log in as the root user; only root has enough permissions to install SAP HANA. The
first thing to do is to set permissions on the directory you copied over into /hana/shared. The permissions need to
be set like this:

chmod -R 744 <Installation bits folder>


If you want to install SAP HANA using the graphical setup, the gtk2 package needs to be installed on the HANA
Large Instance. Check whether it is installed with the command:

rpm -qa | grep gtk2

In the further steps, we demonstrate the SAP HANA setup with the graphical user interface. As the next step, go
into the installation directory and navigate into the subdirectory HDB_LCM_LINUX_X86_64. Start

./hdblcmgui

out of that directory. You are then guided through a sequence of screens where you need to provide the data for
the installation. In the case demonstrated, we are installing the SAP HANA database server and the SAP HANA
client components. Therefore, our selection is 'SAP HANA Database', as shown below.

In the next screen, choose the option 'Install New System'.
After this step, you need to select between several additional components that can be installed in addition to the
SAP HANA database server.

For the purposes of this documentation, we chose the SAP HANA Client and the SAP HANA Studio. We also
installed a scale-up instance. Hence, in the next screen, you need to choose 'Single-Host System'.
In the next screen, you need to provide some data

IMPORTANT
As the HANA System ID (SID), you need to provide the same SID that you provided to Microsoft when you ordered the HANA Large Instance deployment. Choosing a different SID causes the installation to fail due to access permission problems on the different volumes.

As the installation directory, use the /hana/shared directory. In the next step, you need to provide the locations for the HANA data files and the HANA log files.
NOTE
As data and log locations, you should define the volumes that already came with mount points containing the SID you chose in the previous screen. If the SID does not match the one you typed in the screen before, go back and adjust the SID to the value you have on the mount points.

In the next step, review the host name and correct it if necessary.

In the next step, you also need to retrieve data that you gave to Microsoft when you ordered the HANA Large Instance deployment.
IMPORTANT
You need to provide the same System User ID and ID of User Group that you provided to Microsoft when you ordered the unit deployment. If you fail to provide the very same IDs, the installation of SAP HANA on the HANA Large Instance unit fails.

In the next two screens, which are not shown in this documentation, you need to provide the password for the SYSTEM user of the SAP HANA database and the password for the sapadm user, which is used for the SAP Host Agent that gets installed as part of the SAP HANA database instance.
After defining the passwords, a confirmation screen appears. Check all the data listed and continue with the installation. You reach a progress screen that documents the installation progress.
When the installation finishes, you should see a message confirming that the system was installed successfully.

At this point, the SAP HANA instance should be up and running and ready for use. You should be able to connect to it from SAP HANA Studio. Also make sure that you check for the latest SAP HANA patches and apply them.
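To verify that the instance is up, you can query the HANA process list with sapcontrol, which ships with the SAP Host Agent. The sketch below only assembles the command; the instance number (01) is an illustrative assumption, and the command must be run on the HANA Large Instance unit as the <sid>adm user.

```shell
# Build the sapcontrol call that lists the SAP HANA processes.
# The instance number (01) is an illustrative assumption.
INSTANCE_NR=01
CHECK_CMD="sapcontrol -nr $INSTANCE_NR -function GetProcessList"
echo "$CHECK_CMD"
# Run this as the <sid>adm user; all processes should report GREEN.
```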
SAP HANA Large Instances high availability and
disaster recovery on Azure
10/3/2017 54 min to read

High availability and disaster recovery (DR) are important aspects of running your mission-critical SAP HANA on
Azure (Large Instances) server. It's important to work with SAP, your system integrator, or Microsoft to properly
architect and implement the right high-availability and disaster-recovery strategy. It is also important to consider
the recovery point objective (RPO) and the recovery time objective (RTO), which are specific to your environment.
Microsoft supports some SAP HANA high-availability capabilities with HANA Large Instances. These capabilities
include:
Storage replication: The storage system's ability to replicate all data to another HANA Large Instance stamp
in another Azure region. SAP HANA operates independently of this method.
HANA system replication: The replication of all data in SAP HANA to a separate SAP HANA system. The
recovery time objective is minimized through data replication at regular intervals. SAP HANA supports
asynchronous, synchronous in-memory, and synchronous modes. Synchronous mode is recommended only
for SAP HANA systems that are within the same datacenter or less than 100 km apart. In the current design of
HANA large-instance stamps, HANA system replication can be used for high availability only. Currently, HANA
system replication requires a third-party reverse-proxy component for disaster-recovery configurations into
another Azure region.
Host auto-failover: A local fault-recovery solution for SAP HANA to use as an alternative to HANA system
replication. If the master node becomes unavailable, you configure one or more standby SAP HANA nodes in
scale-out mode, and SAP HANA automatically fails over to a standby node.
SAP HANA on Azure (Large Instances) is offered in two Azure regions in each of three different geopolitical
regions (US, Australia, and Europe). The two regions within a geopolitical region that host HANA Large Instance
stamps are connected to separate dedicated network circuits that are used for replicating storage snapshots to provide disaster-recovery
methods. The replication is not established by default. It is set up for customers that ordered disaster-recovery
functionality. Storage replication is dependent on the usage of storage snapshots for HANA Large Instances. It is
not possible to choose an Azure region as a DR region that is in a different geopolitical area.
The following table shows the currently supported high-availability and disaster-recovery methods and
combinations:

Scenario supported in HANA Large Instances: Single node
High-availability option: Not available.
Disaster-recovery option: Dedicated DR setup. Multipurpose DR setup.
Comments: (none)

Scenario supported in HANA Large Instances: Host auto-failover (N+m, including 1+1)
High-availability option: Possible with the standby taking the active role. HANA controls the role switch.
Disaster-recovery option: Dedicated DR setup. Multipurpose DR setup. DR synchronization by using storage replication.
Comments: HANA volume sets are attached to all the nodes (n+m). The DR site must have the same number of nodes.

Scenario supported in HANA Large Instances: HANA system replication
High-availability option: Possible with primary/secondary setup. The secondary moves into the primary role in a failover case. HANA system replication and the OS control the failover.
Disaster-recovery option: Dedicated DR setup. Multipurpose DR setup. DR synchronization by using storage replication. DR by using HANA system replication is not yet possible without third-party components.
Comments: A separate set of disk volumes is attached to each node. Only the disk volumes of the secondary replica in the production site get replicated to the DR location. One set of volumes is required at the DR site.

A dedicated DR setup is one where the HANA Large Instance unit in the DR site is not used for running any other
workload or non-production system. The unit is passive and is deployed only if a disaster failover is executed.
However, this setup is not the preferred choice for many customers.
A multipurpose DR setup is where the HANA Large Instance unit on the DR site runs a non-production workload.
In case of disaster, you shut down the non-production system, you mount the storage-replicated (additional)
volume sets, and then you start the production HANA instance. Most customers who use the HANA Large Instance
disaster-recovery functionality use this configuration.
You can find more information on SAP HANA high availability in the following SAP articles:
SAP HANA High Availability Whitepaper
SAP HANA Administration Guide
SAP Academy Video on SAP HANA System Replication
SAP Support Note #1999880 FAQ on SAP HANA System Replication
SAP Support Note #2165547 SAP HANA Back up and Restore within SAP HANA System Replication
Environment
SAP Support Note #1984882 Using SAP HANA System Replication for Hardware Exchange with
Minimum/Zero Downtime

Network considerations for disaster recovery with HANA Large Instances
To take advantage of the disaster-recovery functionality of HANA Large Instances, you need to design network
connectivity to the two different Azure regions. You need an Azure ExpressRoute circuit connection from on-
premises in your main Azure region and another circuit connection from on-premises to your disaster-recovery
region. This measure covers a situation where there is a problem in an Azure region, including a Microsoft
Enterprise Edge Router (MSEE) location.
As a second measure, you can connect all Azure virtual networks that connect to SAP HANA on Azure (Large
Instances) in one of the regions to an ExpressRoute circuit that connects HANA Large Instances in the other
region. With this cross connection, services running on an Azure virtual network in Region #1 can connect to
HANA Large Instance units in Region #2, and vice versa. This measure addresses the case where only one of the
MSEE locations that connects your on-premises location with Azure goes offline.
The following graphic illustrates a resilient configuration for disaster recovery:
Other requirements when you use HANA Large Instances storage replication for disaster recovery
Additional requirements for a disaster-recovery setup with HANA Large Instances are:
You must order SAP HANA on Azure (Large Instances) SKUs of the same size as your production SKUs and
deploy them in the disaster-recovery region. In the current customer deployments, these instances are used to
run non-production HANA instances. We refer to them as multipurpose DR setups.
You must order additional storage on the DR site for each of your SAP HANA on Azure (Large Instances) SKUs
that you want to recover in the disaster-recovery site. Buying additional storage lets you allocate the storage
volumes. You can allocate the volumes that are the target of the storage replication from your production
Azure region into the disaster-recovery Azure region.

Backup and restore


One of the most important aspects of operating databases is to protect them from various catastrophic
events. The cause of these events can be anything from natural disasters to simple user errors.
Backing up a database, with the ability to restore it to any point in time (such as before someone deleted critical
data), enables restoration to a state that is as close as possible to the way it was prior to the disruption.
Two types of backups must be performed for best results:
Database backups: full, incremental, or differential backups
Transaction-log backups
In addition to full-database backups performed at an application level, you can perform backups with storage
snapshots. Storage snapshots do not replace transaction-log backups. Transaction-log backups remain important
to restore the database to a certain point in time or to empty the logs from already committed transactions.
However, storage snapshots can accelerate recovery by quickly providing a roll-forward image of the database.
SAP HANA on Azure (Large Instances) offers two backup and restore options:
Do it yourself (DIY). After you calculate to ensure there is enough disk space, perform full database and log
backups by using disk backup methods. You can back up either directly to volumes attached to the HANA
Large Instance units or to network file system (NFS) shares set up in an Azure virtual machine (VM). In the latter
case, customers set up a Linux VM in Azure, attach Azure Storage to the VM, and share the storage through
a configured NFS server in that VM. If you perform the backup against volumes that directly attach to
HANA Large Instance units, you need to copy the backups to an Azure storage account (after you set up an
Azure VM that exports NFS shares that are based on Azure Storage). Or you can use either an Azure
backup vault or Azure cold storage.
Another option is to use a third-party data protection tool to store the backups after they are copied to an
Azure storage account. The DIY backup option might also be necessary for data that you need to store for
longer periods of time for compliance and auditing purposes. In all cases, the backups are copied into NFS
shares represented through a VM and Azure Storage.
Use the backup and restore functionality that the underlying infrastructure of SAP HANA on Azure (Large
Instances) provides. This option fulfills the need for backups and fast restores. The rest of this section
addresses the backup and restore functionality that's offered with HANA Large Instances. This section also
covers the relationship backup and restore has to the disaster-recovery functionality offered by HANA
Large Instances.
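For the DIY option, a full data backup can be triggered through hdbsql with a BACKUP DATA statement. The sketch below only assembles that statement; the target path is an illustrative assumption, and the userstore key passed to hdbsql is a placeholder for the key you configure later.

```shell
# Assemble the HANA SQL statement for a full file-based data backup (DIY option).
# The target path is an assumption - point it at a volume or NFS share with
# enough free space for a full backup.
TARGET="/hana/backup/FULL_$(date +%Y%m%d)"
STMT="BACKUP DATA USING FILE ('$TARGET')"
echo "$STMT"
# On the HANA Large Instance unit you would pass it to hdbsql, for example:
#   hdbsql -U <userstore key> "$STMT"
```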

NOTE
The snapshot technology that is used by the underlying infrastructure of HANA Large Instances has a dependency on SAP
HANA snapshots. At this point, SAP HANA snapshots do not work in conjunction with multiple tenants of SAP HANA
multitenant database containers. Thus, this method of backup cannot be used when you deploy multiple tenants in SAP
HANA multitenant database containers. If only one tenant is deployed, SAP HANA snapshots do work.
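To check how many tenants are deployed before relying on this backup method, you can list the databases of an MDC system from the system database. The sketch below only assembles the hdbsql call; the userstore key is a placeholder, and SYS.M_DATABASES is the system view that lists the databases.

```shell
# Assemble an hdbsql call that lists the databases of an MDC system.
# Run it against the system database; more than one tenant row means the
# snapshot-based backup method described here cannot be used.
SQL="SELECT DATABASE_NAME FROM SYS.M_DATABASES"
echo "hdbsql -U <systemdb userstore key> \"$SQL\""
```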

Using storage snapshots of SAP HANA on Azure (Large Instances)


The storage infrastructure underlying SAP HANA on Azure (Large Instances) supports storage snapshots of
volumes. Both backup and restoration of volumes is supported, with the following considerations:
Instead of full-database backups, storage volume snapshots are taken on a frequent basis.
When triggering a snapshot over the /hana/data, /hana/log, and /hana/shared (includes /usr/sap) volumes, the
storage snapshot initiates an SAP HANA snapshot before it executes the storage snapshot. This SAP HANA
snapshot is the setup point for eventual log restorations after recovery of the storage snapshot.
After the point where the storage snapshot has been executed successfully, the SAP HANA snapshot is deleted.
Transaction-log backups are taken frequently and are stored in the /hana/logbackups volume or in Azure. You
can trigger the /hana/logbackups volume that contains the transaction-log backups to take a snapshot
separately. In that case, you do not need to execute a HANA snapshot.
If you must restore a database to a certain point in time, request Microsoft Azure Support (for a production
outage) or SAP HANA on Azure Service Management to restore to a certain storage snapshot. An example is a
planned restoration of a sandbox system to its original state.
The SAP HANA snapshot that's included in the storage snapshot is an offset point for applying transaction-log
backups that have been executed and stored after the storage snapshot was taken.
These transaction-log backups are taken to restore the database back to a certain point in time.
You can perform storage snapshots targeting three different classes of volumes:
A combined snapshot over /hana/data and /hana/shared (includes /usr/sap). This snapshot requires the
creation of an SAP HANA snapshot as preparation for the storage snapshot. The SAP HANA snapshot will
make sure that the database is in a consistent state from a storage point of view.
A separate snapshot over /hana/logbackups.
An OS partition (only for Type I of HANA Large Instances).
Storage snapshot considerations

NOTE
Storage snapshots consume storage space that has been allocated to the HANA Large Instance units. Therefore, you need
to consider the following aspects when scheduling storage snapshots and deciding how many storage snapshots to keep.

The specific mechanics of storage snapshots for SAP HANA on Azure (Large Instances) include:
A specific storage snapshot (at the point in time when it is taken) consumes little storage.
As data content changes and the content in SAP HANA data files change on the storage volume, the snapshot
needs to store the original block content, as well as the data changes.
As a result, the storage snapshot increases in size. The longer the snapshot exists, the larger the storage
snapshot becomes.
The more changes that are made to the SAP HANA database volume over the lifetime of a storage snapshot,
the larger the space consumption of the storage snapshot.
SAP HANA on Azure (Large Instances) comes with fixed volume sizes for the SAP HANA data and log volumes.
Performing snapshots of those volumes eats into your volume space. You need to determine when to schedule
storage snapshots. You also need to monitor the space consumption of the storage volumes, as well as manage
the number of snapshots that you store. You can disable the storage snapshots when you either import masses of
data or perform other significant changes to the HANA database.
The following sections provide information for performing these snapshots, including general recommendations:
Though the hardware can sustain 255 snapshots per volume, we highly recommend that you stay well below
this number.
Before you perform storage snapshots, monitor and keep track of free space.
Lower the number of storage snapshots based on free space. You can lower the number of snapshots that you
keep, or you can extend the volumes. You can order additional storage in 1-terabyte units.
During activities such as moving data into SAP HANA with SAP platform migration tools (R3load) or restoring
SAP HANA databases from backups, disable storage snapshots on the /hana/data volume.
During larger reorganizations of SAP HANA tables, storage snapshots should be avoided, if possible.
Storage snapshots are a prerequisite to taking advantage of the disaster-recovery capabilities of SAP HANA on
Azure (Large Instances).
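The free-space monitoring recommended above can be sketched with a small shell loop. The volume list and the 80 percent threshold are illustrative assumptions; adjust both to your installation.

```shell
# Warn when a volume that hosts snapshots crosses a usage threshold.
# Volume list and the 80% threshold are illustrative assumptions.
THRESHOLD=80
for vol in /hana/data /hana/log /hana/shared /hana/logbackups; do
    [ -d "$vol" ] || continue            # skip volumes not present on this host
    used=$(df -P "$vol" | awk 'NR==2 { gsub("%", "", $5); print $5 }')
    if [ "$used" -ge "$THRESHOLD" ]; then
        echo "WARNING: $vol is ${used}% full - delete old snapshots or extend the volume"
    fi
done
```

Scheduled regularly, a check like this gives early warning before snapshot growth fills a fixed-size volume.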
Setting up storage snapshots
The steps to set up storage snapshots with HANA Large Instances are as follows:
1. Make sure that Perl is installed on the Linux operating system on the HANA Large Instances server.
2. Modify the /etc/ssh/ssh_config to add the line MACs hmac-sha1.
3. Create an SAP HANA backup user account on the master node for each SAP HANA instance you are running, if
applicable.
4. Install the SAP HANA HDB client on all the SAP HANA Large Instances servers.
5. On the first SAP HANA Large Instances server of each region, create a public key to access the underlying
storage infrastructure that controls snapshot creation.
6. Copy the scripts and configuration file from GitHub to the location of hdbsql in the SAP HANA installation.
7. Modify the HANABackupDetails.txt file as necessary for the appropriate customer specifications.
Step 1: Install the SAP HANA HDB client
The Linux operating system installed on SAP HANA on Azure (Large Instances) includes the folders and scripts
necessary to execute SAP HANA storage snapshots for backup and disaster-recovery purposes. Check for more
recent releases in GitHub. The most recent release version of the scripts is 2.1. However, it is your responsibility to
install the SAP HANA HDB client on the HANA Large Instance units while you are installing SAP HANA. (Microsoft
does not install the HDB client or SAP HANA.)
Step 2: Change the /etc/ssh/ssh_config
Change /etc/ssh/ssh_config by adding the MACs hmac-sha1 line as shown here:

# RhostsRSAAuthentication no
# RSAAuthentication yes
# PasswordAuthentication yes
# HostbasedAuthentication no
# GSSAPIAuthentication no
# GSSAPIDelegateCredentials no
# GSSAPIKeyExchange no
# GSSAPITrustDNS no
# BatchMode no
# CheckHostIP yes
# AddressFamily any
# ConnectTimeout 0
# StrictHostKeyChecking ask
# IdentityFile ~/.ssh/identity
# IdentityFile ~/.ssh/id_rsa
# IdentityFile ~/.ssh/id_dsa
# Port 22
Protocol 2
# Cipher 3des
# Ciphers aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc
# MACs hmac-md5,hmac-sha1,umac-64@openssh.com,hmac-ripemd160
MACs hmac-sha1
# EscapeChar ~
# Tunnel no
# TunnelDevice any:any
# PermitLocalCommand no
# VisualHostKey no
# ProxyCommand ssh -q -W %h:%p gateway.example.com
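
The change above can be applied idempotently with a small helper: the function below appends the line only when it is missing, so repeated runs are safe. On the HANA Large Instance unit you would call it on /etc/ssh/ssh_config; the path in the example comment is a placeholder.

```shell
# Append "MACs hmac-sha1" to an ssh_config file only if it is not already there,
# so the helper is safe to run repeatedly (e.g., from configuration management).
add_hmac_sha1() {
    grep -q '^MACs hmac-sha1$' "$1" || echo 'MACs hmac-sha1' >> "$1"
}
# Example (path is a placeholder for /etc/ssh/ssh_config):
# add_hmac_sha1 /etc/ssh/ssh_config
```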

Step 3: Create a public key


To enable access to the storage snapshot interfaces of your HANA Large Instance tenant, you need to establish a
sign-in through a public key. On the first SAP HANA on Azure (Large Instances) server in your tenant, create a
public key to be used to access the storage infrastructure so you can create snapshots. The public key ensures that
a password is not required to sign in to the storage snapshot interfaces. Creating a public key also means that you
do not need to maintain password credentials. In Linux on the SAP HANA Large Instances server, execute the
following command to generate the public key:

ssh-keygen -t dsa -b 1024

The new location is /root/.ssh/id_dsa.pub. Do not enter an actual passphrase, or else you are required to enter
it each time you sign in. Instead, select Enter twice to remove the requirement of entering a passphrase when
signing in.
Check to make sure that the public key was created as expected by changing folders to /root/.ssh/ and then
executing the ls command. If the key is present, you can copy it by running the following command:

cat /root/.ssh/id_dsa.pub

At this point, contact SAP HANA on Azure Service Management and provide them with the public key. The service
representative uses the public key to register it in the underlying storage infrastructure that is carved out for your
HANA Large Instance tenant.
Step 4: Create an SAP HANA user account
To initiate the creation of SAP HANA snapshots, you need to create a user account in SAP HANA that the storage
snapshot scripts can use. Create an SAP HANA user account within SAP HANA Studio for this purpose. This
account must have the following privileges: Backup Admin and Catalog Read. In this example, the username is
SCADMIN. The user account name created in HANA Studio is case-sensitive. Make sure to select No for requiring
the user to change the password on the next sign-in.
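
As an alternative to HANA Studio, the same account can be created with SQL. The sketch below only assembles the statements; the username is the example value from the text, the password is a placeholder, and NO FORCE_FIRST_PASSWORD_CHANGE mirrors the "do not require a password change" setting mentioned above.

```shell
# Assemble the SQL that creates the backup user with the required privileges.
# SCADMIN is the example username; <password> is a placeholder to replace.
CREATE_SQL="CREATE USER SCADMIN PASSWORD \"<password>\" NO FORCE_FIRST_PASSWORD_CHANGE"
GRANT_SQL="GRANT BACKUP ADMIN, CATALOG READ TO SCADMIN"
printf '%s;\n%s;\n' "$CREATE_SQL" "$GRANT_SQL"
# Execute both statements through hdbsql or the SQL console in HANA Studio.
```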

Step 5: Authorize the SAP HANA user account


In this step, you authorize the SAP HANA user account that you created, so that the scripts don't need to submit
passwords at runtime. The SAP HANA command hdbuserstore enables the creation of an SAP HANA user key,
which is stored on one or more SAP HANA nodes. The user key lets the user access SAP HANA without having to
manage passwords from within the scripting process. The scripting process is discussed later.

IMPORTANT
Run the following command as root. Otherwise, the script cannot work properly.

Enter the hdbuserstore command as follows:


For non-MDC HANA setup
hdbuserstore set <key> <host>:3<instance number>15 <user> <password>

For MDC HANA setup

hdbuserstore set <key> <host>:3<instance number>13 <user> <password>

In the following example, the user is SCADMIN01, the hostname is lhanad01, and the instance number is 01:

hdbuserstore set SCADMIN01 lhanad01:30115 <backup username> <password>

If you have an SAP HANA scale-out configuration, you should manage all scripting from a single server. In this
example, the SAP HANA key SCADMIN01 must be altered for each host in a way that reflects which host is
related to the key. Amend the SAP HANA backup account with the instance number of the HANA DB. The key
must have administrative privileges on the host it is assigned to, and the backup user for scale-out configurations
must have access rights to all the SAP HANA instances. Assuming the three scale-out nodes have the names
lhanad01, lhanad02, and lhanad03, the sequence of commands looks like this:

hdbuserstore set SCADMIN01 lhanad01:30115 SCADMIN <password>


hdbuserstore set SCADMIN01 lhanad02:30115 SCADMIN <password>
hdbuserstore set SCADMIN01 lhanad03:30115 SCADMIN <password>
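
The hostname:port pairs in these commands follow the pattern 3<instance number>15, so instance 01 yields port 30115. A small sketch that prints one command per node, using the key, user, and hostnames from the example above:

```shell
# Print one hdbuserstore command per scale-out node.
# The SQL port is 3<instance number>15 for a non-MDC installation.
KEY=SCADMIN01
DBUSER=SCADMIN
INSTANCE=01
for host in lhanad01 lhanad02 lhanad03; do
    echo "hdbuserstore set $KEY ${host}:3${INSTANCE}15 $DBUSER <password>"
done
```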

Step 6: Get the snapshot scripts, configure the snapshots, and test the configuration and connectivity
Download the most recent version of the scripts from GitHub. Copy the downloaded scripts and the text file to the
working directory for hdbsql. For current HANA installations, this directory is, for example,
/hana/shared/D01/exe/linuxx86_64/hdb.

azure_hana_backup.pl
azure_hana_replication_status.pl
azure_hana_snapshot_details.pl
azure_hana_snapshot_delete.pl
testHANAConnection.pl
testStorageSnapshotConnection.pl
removeTestStorageSnapshot.pl
HANABackupCustomerDetails.txt

Here is the purpose of the different scripts and files:


azure_hana_backup.pl: Schedule this script with cron to execute storage snapshots on either the HANA
data/log/shared volumes, the /hana/logbackups volume, or the OS (on Type I SKUs of HANA Large Instances).
azure_hana_replication_status.pl: This script provides the basic details around the replication status from
the production site to the disaster-recovery site. The script monitors to ensure that the replication is taking
place, and it shows the size of the items that are being replicated. It also provides guidance if a replication is
taking too long or if the link is down.
azure_hana_snapshot_details.pl: This script provides a list of basic details about all the snapshots, per
volume, that exist in your environment. This script can be run on the primary server or on a server unit in the
disaster-recovery location. The script provides the following information broken down by each volume that
contains snapshots:
Size of total snapshots in a volume
Each snapshot in that volume includes the following details:
Snapshot name
Create time
Size of the snapshot
Frequency of the snapshot
HANA Backup ID associated with that snapshot, if relevant
azure_hana_snapshot_delete.pl: This script deletes a storage snapshot or a set of snapshots. You can use
either the SAP HANA backup ID as found in HANA Studio or the storage snapshot name. Currently, the backup
ID is only tied to the snapshots created for the HANA data/log/shared volumes. Otherwise, if the snapshot ID is
entered, it seeks all snapshots that match the entered snapshot ID.
testHANAConnection.pl: This script tests the connection to the SAP HANA instance and is required to set up
the storage snapshots.
testStorageSnapshotConnection.pl: This script has two purposes. First, it ensures that the HANA Large
Instance unit that runs the scripts has access to the assigned storage virtual machine and to the storage
snapshot interface of your HANA Large Instances. The second purpose is to create a temporary snapshot for
the HANA instance you are testing. This script should be run for every HANA instance on a server to ensure
that the backup scripts function as expected.
removeTestStorageSnapshot.pl: This script deletes the test snapshot as created with the script
testStorageSnapshotConnection.pl.
HANABackupCustomerDetails.txt: This file is a modifiable configuration file that you need to modify to
adapt to your SAP HANA configuration.
The HANABackupCustomerDetails.txt file is the control and configuration file for the script that runs the storage
snapshots. Adjust the file for your purposes and setup. You should have received the Storage Backup Name and
Storage IP Address from SAP HANA on Azure Service Management when your instances were deployed. You
cannot modify the sequence, ordering, or spacing of any of the variables in this file. Otherwise, the scripts are not
going to run properly. Additionally, you received the IP address of the scale-up node or the master node (if scale-
out) from SAP HANA on Azure Service Management. You also know the HANA instance number that you got
during the installation of SAP HANA. Now you need to add a backup name to the configuration file.
For a scale-up or scale-out deployment, the configuration file would look like the following example after you
filled in the storage backup name and the storage IP address. You also need to fill in the following data in the
configuration file:
Single node or master node IP address
HANA instance number
Backup name

#Provided by Microsoft Service Management


Storage Backup Name: client1hm3backup
Storage IP Address: 10.240.20.31
#Node IP addresses, instance numbers, and HANA backup name
#provided by customer. HANA backup name created using
#hdbuserstore utility.
Node 1 IP Address:
Node 1 HANA instance number:
Node 1 HANA userstore Name:

NOTE
Currently, only Node 1 details are used in the actual HANA storage snapshot script. We recommend that you test access to
or from all HANA nodes so that, if the master backup node ever changes, you have already ensured that any other node
can take its place by modifying the details in Node 1.

After you put all the configuration data into the HANABackupCustomerDetails.txt file, you need to check whether
the configurations are correct regarding the HANA instance data. Use the script testHANAConnection.pl. This script
is independent of an SAP HANA scale-up or scale-out configuration.

testHANAConnection.pl

If you have an SAP HANA scale-out configuration, ensure that the master HANA instance has access to all the
required HANA servers and instances. There are no parameters to the test script, but you must add your data into
the HANABackupCustomerDetails.txt configuration file for the script to run properly. Only the shell command
error codes are returned, so it is not possible for the script to error check every instance. Even so, the script does
provide some helpful comments for you to double-check.
To run the script, enter the following command:

./testHANAConnection.pl

If the script successfully obtains the status of the HANA instance, it displays a message that the HANA connection
was successful.
The next test step is to check the connectivity to the storage based on the data you put into the
HANABackupCustomerDetails.txt configuration file, and then execute a test snapshot. Before you execute the
azure_hana_backup.pl script, you must execute this test. If a volume contains no snapshots, it is impossible to
determine whether the volume is empty or if there is an SSH failure to obtain the snapshot details. For this reason,
the script executes two steps:
It verifies that the tenant's storage virtual machine and interfaces are accessible for the scripts to execute
snapshots.
It creates a test, or dummy, snapshot for each volume by HANA instance.
For this reason, the HANA instance is included as an argument. If the execution fails, it is not possible to provide
error checking for the storage connection. Even if there is no error checking, the script provides helpful hints.
The script is run as:

./testStorageSnapshotConnection.pl <HANA SID>

Next, the script tries to sign in to the storage by using the public key provided in the previous setup steps and with
the data configured in the HANABackupCustomerDetails.txt file. If sign-in is successful, the following content is
shown:

**********************Checking access to Storage**********************


Storage Access successful!!!!!!!!!!!!!!

If problems occur connecting to the storage console, the output looks like this:

**********************Checking access to Storage**********************


WARNING: Storage check status command 'volume show -type RW -fields volume' failed: 65280
WARNING: Please check the following:
WARNING: Was publickey sent to Microsoft Service Team?
WARNING: If passphrase entered while using tool, publickey must be re-created and passphrase must be left
blank for both entries
WARNING: Ensure correct IP address was entered in HANABackupCustomerDetails.txt
WARNING: Ensure correct Storage backup name was entered in HANABackupCustomerDetails.txt
WARNING: Ensure that no modification in format HANABackupCustomerDetails.txt like additional lines, line
numbers or spacing
WARNING: ******************Exiting Script*******************************
After a successful sign-in to the storage virtual machine interfaces, the script continues with phase #2 and creates
a test snapshot. The output is shown here for a three-node scale-out configuration of SAP HANA:

**********************Creating Storage snapshot**********************


Taking snapshot testStorage.recent for hana_data_hm3_mnt00001_t020_dp ...
Snapshot created successfully.
Taking snapshot testStorage.recent for hana_data_hm3_mnt00001_t020_vol ...
Snapshot created successfully.
Taking snapshot testStorage.recent for hana_data_hm3_mnt00002_t020_dp ...
Snapshot created successfully.
Taking snapshot testStorage.recent for hana_data_hm3_mnt00002_t020_vol ...
Snapshot created successfully.
Taking snapshot testStorage.recent for hana_data_hm3_mnt00003_t020_dp ...
Snapshot created successfully.
Taking snapshot testStorage.recent for hana_data_hm3_mnt00003_t020_vol ...
Snapshot created successfully.
Taking snapshot testStorage.recent for hana_log_backups_hm3_t020_dp ...
Snapshot created successfully.
Taking snapshot testStorage.recent for hana_log_backups_hm3_t020_vol ...
Snapshot created successfully.
Taking snapshot testStorage.recent for hana_log_hm3_mnt00001_t020_vol ...
Snapshot created successfully.
Taking snapshot testStorage.recent for hana_log_hm3_mnt00002_t020_vol ...
Snapshot created successfully.
Taking snapshot testStorage.recent for hana_log_hm3_mnt00003_t020_vol ...
Snapshot created successfully.
Taking snapshot testStorage.recent for hana_shared_hm3_t020_vol ...
Snapshot created successfully.

If the test snapshot has been executed successfully with the script, you can proceed with configuring the actual
storage snapshots. If it is not successful, investigate the problems before going ahead. The test snapshot should
stay around until the first real snapshots are done.
Step 7: Perform snapshots
As all the preparation steps are finished, you can start to configure the actual storage snapshot configuration. The
script to be scheduled works with SAP HANA scale-up and scale-out configurations. You should schedule the
execution of the scripts via cron.
Three types of snapshot backups can be created:
HANA: Combined snapshot backup in which the volumes that contain /hana/data and /hana/shared (which
contains /usr/sap as well) are covered by the coordinated snapshot. A single file restore is possible from this
snapshot.
Logs: Snapshot backup of the /hana/logbackups volume. No HANA snapshot is triggered to execute this
storage snapshot. This storage volume is the volume meant to contain the SAP HANA transaction-log backups.
SAP HANA transaction-log backups are performed more frequently to restrict log growth and prevent
potential data loss. A single file restore is possible from this snapshot. You should not lower the frequency to
under five minutes.
Boot: Snapshot of the volume that contains the boot logical unit number (LUN) of the HANA Large Instance.
This snapshot backup is possible only with the Type I SKUs of HANA Large Instances. You can't perform single
file restores from the snapshot of the volume that contains the boot LUN.
The call syntax for these three different types of snapshots looks like this:
HANA backup covering /hana/data and /hana/shared (includes /usr/sap)
./azure_hana_backup.pl hana <HANA SID> manual 30

For /hana/logbackups snapshot


./azure_hana_backup.pl logs <HANA SID> manual 30

For snapshot of the volume storing the boot LUN


./azure_hana_backup.pl boot none manual 30

The following parameters need to be specified:


The first parameter characterizes the type of the snapshot backup. The values allowed are hana, logs, and
boot.
The second parameter is the HANA SID (like HM3) or none. If the value of the first parameter is hana or
logs, this parameter is the HANA SID (like HM3). For a boot volume backup, the value is none.
The third parameter is a snapshot or backup label for the type of snapshot. It has two purposes. For you, it
gives the snapshot a name, so that you know what these snapshots are about. For the script
azure_hana_backup.pl, it determines the number of storage snapshots that are retained under that specific
label. If you schedule two storage snapshot backups of the same type (like hana) with two different labels,
and define that 30 snapshots should be kept for each, you end up with 60 storage snapshots of the affected
volumes.
The fourth parameter defines the retention of the snapshots indirectly, by defining the number of snapshots
with the same snapshot prefix (label) to be kept. This parameter is important for a scheduled execution
through cron.
In the case of a scale-out, the script does some additional checking to ensure that you can access all the HANA
servers. The script also checks that all HANA instances return the appropriate status of the instances, before it
creates an SAP HANA snapshot. The SAP HANA snapshot is followed by a storage snapshot.
The execution of the script azure_hana_backup.pl creates the storage snapshot in the following three distinct
phases:
1. Executes an SAP HANA snapshot
2. Executes a storage snapshot
3. Removes the SAP HANA snapshot that was created before execution of the storage snapshot
To execute the script, you call it from the HDB executable folder that it was copied to.
The retention period is administered with the number of snapshots that are submitted as a parameter when you
execute the script (such as 30, shown previously). So, the amount of time that is covered by the storage snapshots
is a function of two things: the period of execution and the number of snapshots submitted as a parameter when
executing the script. If the number of snapshots that are kept exceeds the number that are named as a parameter
in the call of the script, the oldest storage snapshot of the same label (in our previous case, manual) is deleted
before a new snapshot is executed. The number you give as the last parameter of the call is the number you can
use to control the number of snapshots that are kept. With this number, you can also control, indirectly, the disk
space used for snapshots.
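The relationship between schedule and coverage can be sketched with a quick calculation (the numbers below are illustrative, not prescriptive):

```shell
# Coverage window = execution interval x retained snapshot count.
# Example: a snapshot every hour, retaining 46 snapshots under one label.
interval_hours=1
retained=46
coverage=$(( interval_hours * retained ))
echo "coverage: ${coverage} hours"   # roughly two days of point-in-time recovery
```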

NOTE
As soon as you change the label, the counting starts again. This means you need to be strict in labeling so your snapshots
are not accidentally deleted.

Snapshot strategies
The frequency of snapshots for the different types depends on whether you use the HANA Large Instance
disaster-recovery functionality or not. The disaster-recovery functionality of HANA Large Instances relies on
storage snapshots. Relying on storage snapshots might require some special recommendations in terms of the
frequency and execution periods of the storage snapshots.
In the considerations and recommendations that follow, we assume that you do not use the disaster-recovery
functionality HANA Large Instances offers. Instead, you use the storage snapshots as a way to have backups and
be able to provide point-in-time recovery for the last 30 days. Given the limitations of the number of snapshots
and space, customers have considered the following requirements:
The recovery time for point-in-time recovery.
The space used.
The recovery point objective and the recovery time objective for potential disaster recovery.
The eventual execution of HANA full-database backups against disks. Whenever a full-database backup against
disks or the backint interface is performed, the execution of the storage snapshots fails. If you plan to execute
full-database backups on top of storage snapshots, make sure that the execution of the storage snapshots is
disabled during this time.
The number of snapshots per volume is limited to 255.
For customers who don't use the disaster-recovery functionality of HANA Large Instances, the snapshot period is
less frequent. In such cases, we see customers performing the combined snapshots on /hana/data and
/hana/shared (includes /usr/sap) in 12-hour or 24-hour periods, and they keep the snapshots to cover a whole
month. The same is true with the snapshots of the log backup volume. However, the execution of SAP HANA
transaction-log backups against the log backup volume occurs in 5-minute to 15-minute periods.
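A crontab implementing this less frequent strategy might look like the following sketch. The labels and execution times are example values that follow the call syntax shown earlier; the transaction-log backup itself is configured in SAP HANA, not in cron:

```shell
# Combined hana-type snapshot every 12 hours, retained for a month (2/day x 30 = 60).
0 2,14 * * * ./azure_hana_backup.pl hana HM3 twicedailyhana 60
# Snapshot of the /hana/logbackups volume once per day, kept for 30 days.
30 2 * * * ./azure_hana_backup.pl logs HM3 dailylogs 30
```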
We encourage you to perform scheduled storage snapshots by using cron. We also recommend that you use the
same script for all backups and disaster-recovery needs. You need to modify the script inputs to match the various
requested backup times. These snapshots are all scheduled differently in cron depending on their execution time:
hourly, 12-hour, daily, or weekly.
An example of a cron schedule in /etc/crontab might look like this:

00 1-23 * * * ./azure_hana_backup.pl hana HM3 hourlyhana 46


10 00 * * * ./azure_hana_backup.pl hana HM3 dailyhana 28
00,05,10,15,20,25,30,35,40,45,50,55 * * * * Perform SAP HANA transaction log backup
22 12 * * * ./azure_hana_backup.pl logs HM3 dailylogback 28
30 00 * * * ./azure_hana_backup.pl boot none dailyboot 28

In the previous example, there is an hourly combined snapshot that covers the volumes that contain the
/hana/data and /hana/shared (includes /usr/sap) locations. This type of snapshot would be used for a faster point-
in-time recovery within the past two days. Additionally, there is a daily snapshot on those volumes. So, you have
two days of coverage by hourly snapshots, plus four weeks of coverage by daily snapshots. Additionally, the
transaction-log backup volume is backed up once every day. These backups are kept for four weeks as well. As
you see in the third line of crontab, the backup of the HANA transaction log is scheduled to execute every five
minutes. The start minutes of the different cron jobs that execute storage snapshots are staggered, so that those
snapshots are not executed all at once at a certain point in time.
In the following example, you perform a combined snapshot that covers the volumes that contain the /hana/data
and /hana/shared (including /usr/sap) locations on an hourly basis. You keep these snapshots for two days. The
snapshots of the transaction-log backup volumes are executed on a five-minute basis and are kept for four hours.
As before, the backup of the HANA transaction log file is scheduled to execute every five minutes. The snapshot of
the transaction-log backup volume is performed with a two-minute delay after the transaction-log backup has
started. Within those two minutes, the SAP HANA transaction-log backup should finish under normal
circumstances. As before, the volume that contains the boot LUN is backed up once per day by a storage snapshot
and is kept for four weeks.
10 0-23 * * * ./azure_hana_backup.pl hana HM3 hourlyhana 48
0,5,10,15,20,25,30,35,40,45,50,55 * * * * Perform SAP HANA transaction log backup
2,7,12,17,22,27,32,37,42,47,52,57 * * * * ./azure_hana_backup.pl logs HM3 logback 48
30 00 * * * ./azure_hana_backup.pl boot none dailyboot 28

The following graphic illustrates the sequences of the previous example, excluding the boot LUN:

SAP HANA performs regular writes against the /hana/log volume to document the committed changes to the
database. On a regular basis, SAP HANA writes a savepoint to the /hana/data volume. As specified in crontab, an
SAP HANA transaction-log backup is executed every five minutes. You also see that an SAP HANA snapshot is
executed every hour as a result of triggering a combined storage snapshot over the /hana/data and /hana/shared
volumes. After the HANA snapshot succeeds, the combined storage snapshot is executed. As instructed in crontab,
the storage snapshot on the /hana/logbackup volume is executed every five minutes, around two minutes after
the HANA transaction-log backup.

IMPORTANT
The use of storage snapshots for SAP HANA backups is valuable only when the snapshots are performed in conjunction
with SAP HANA transaction-log backups. These transaction-log backups need to be able to cover the time periods between
the storage snapshots.

If you've set a commitment to users of a point-in-time recovery of 30 days, do the following:


In extreme cases, you need the ability to access a combined storage snapshot over /hana/data and
/hana/shared that is 30 days old.
Have contiguous transaction-log backups that cover the time between any of the combined storage snapshots.
So, the oldest snapshot of the transaction-log backup volume needs to be 30 days old. This is not the case if
you copy the transaction-log backups to another NFS share that is located on Azure storage. In that case, you
might pull old transaction-log backups from that NFS share.
To benefit from storage snapshots and the eventual storage replication of transaction-log backups, you need to
change the location that SAP HANA writes the transaction-log backups to. You can make this change in HANA
Studio. Though SAP HANA backs up full log segments automatically, you should specify a log backup interval to
be deterministic. This is especially true when you use the disaster-recovery option, because you usually want to
execute log backups with a deterministic period. In the following case, we took 15 minutes as the log backup
interval.
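Instead of HANA Studio, the same interval can be set with SQL through hdbsql. This is a sketch; the instance number and user-store key below are assumptions, and log_backup_timeout_s is specified in seconds (900 = 15 minutes):

```shell
# Set the HANA log backup interval to 15 minutes (900 seconds).
# -i 01 (instance number) and -U SCADMIN01 (hdbuserstore key) are assumptions.
./hdbsql -n localhost -i 01 -U SCADMIN01 \
  "ALTER SYSTEM ALTER CONFIGURATION ('global.ini','SYSTEM') SET ('persistence','log_backup_timeout_s') = '900' WITH RECONFIGURE"
```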

You can choose backups that are more frequent than every 15 minutes. This is frequently done in conjunction
with disaster recovery. Some customers perform transaction-log backups every five minutes.
If the database has never been backed up, the final step is to perform a file-based database backup to create a
single backup entry that must exist within the backup catalog. Otherwise, SAP HANA cannot initiate your specified
log backups.

After your first successful storage snapshots have been executed, you can also delete the test snapshot that was
executed in step 6. To do so, run the script removeTestStorageSnapshot.pl :

./removeTestStorageSnapshot.pl <hana instance>

Monitoring the number and size of snapshots on the disk volume


On a particular storage volume, you can monitor the number of snapshots and the storage consumption of those
snapshots. The ls command doesn't show the snapshot directory or files. However, the Linux OS command du
shows details about those storage snapshots, because they are stored on the same volumes. The command can be
used with the following options:
du -sh .snapshot : Provides a total of all the snapshots within the snapshot directory.
du -sh --max-depth=1 : Lists all the snapshots that are saved in the .snapshot folder and the size of each
snapshot.
du -hc : Provides the total size used by all the snapshots.

Use these commands to make sure that the snapshots that are taken and stored are not consuming all the storage
on the volumes.
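These checks can be wrapped in a small helper for regular monitoring. This is a minimal sketch; the mount points are assumptions based on this document, and the 50-GB budget is an arbitrary example value:

```shell
# Print the space consumed by snapshots on a volume, in MB (0 if the
# .snapshot directory does not exist, for example for the boot LUN).
snapshot_usage_mb() {
    used=$(du -sm "$1/.snapshot" 2>/dev/null | awk '{print $1}')
    echo "${used:-0}"
}

# Example: warn when a volume exceeds an assumed 50-GB snapshot budget.
LIMIT_MB=51200
for vol in /hana/data /hana/shared /hana/logbackups; do
    used=$(snapshot_usage_mb "$vol")
    if [ "$used" -gt "$LIMIT_MB" ]; then
        echo "WARNING: snapshots on $vol consume ${used} MB"
    fi
done
```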

NOTE
The snapshots of the boot LUN are not visible with the previous commands.

Getting details of snapshots


To get more details on snapshots, you can also use the script azure_hana_snapshot_details.pl . This script can be
run in either location if there is an active server in the disaster-recovery location. The script provides the following
output, broken down by each volume that contains snapshots:
Size of total snapshots in a volume
Each snapshot in that volume includes the following details:
Snapshot name
Create time
Size of the snapshot
Frequency of the snapshot
HANA Backup ID associated with that snapshot, if relevant
The execution syntax of the script looks like this:

./azure_hana_snapshot_details.pl

Because the script tries to retrieve the HANA backup ID, it needs to connect to the SAP HANA instance. This
connection requires the configuration file HANABackupCustomerDetails.txt to be correctly set. An output of two
snapshots on a volume might look like this:

**********************************************************
****Volume: hana_shared_SAPTSTHDB100_t020_vol ***********
**********************************************************
Total Snapshot Size: 411.8MB
----------------------------------------------------------
Snapshot: customer.2016-09-20_1404.0
Create Time: "Tue Sep 20 18:08:35 2016"
Size: 2.10MB
Frequency: customer
HANA Backup ID:
----------------------------------------------------------
Snapshot: customer2.2016-09-20_1532.0
Create Time: "Tue Sep 20 19:36:21 2016"
Size: 2.37MB
Frequency: customer2
HANA Backup ID:

File-level restore from a storage snapshot


For the snapshot types hana and logs, you are able to access the snapshots directly on the volumes in the
.snapshot directory. There is a subdirectory for each of the snapshots. You should be able to copy each file that is
covered by the snapshot in the state it had at the point of the snapshot from that subdirectory into the actual
directory structure.
NOTE
Single file restore does not work for snapshots of the boot LUN. The .snapshot directory is not exposed in the boot LUN.

Reducing the number of snapshots on a server


As explained earlier, you can reduce the number of certain labels of snapshots that you store. The last two
parameters of the command to initiate a snapshot are the label and the number of snapshots you want to retain.

./azure_hana_backup.pl hana HM3 hanadaily 30

In the previous example, the snapshot label is hanadaily and the number of snapshots with this label to be
retained is 30. As you respond to disk space consumption, you might want to reduce the number of stored
snapshots. The easy way to reduce the number of snapshots to 15, for example, is to run the script with the last
parameter set to 15:

./azure_hana_backup.pl hana HM3 hanadaily 15

If you run the script with this setting, the number of snapshots, including the new storage snapshot, is 15. The
15 most recent snapshots are kept, and the older snapshots are deleted.

NOTE
This script reduces the number of snapshots only if there are snapshots that are more than one hour old. The script does
not delete snapshots that are less than one hour old. These restrictions are related to the optional disaster-recovery
functionality offered.

If you no longer want to maintain a set of snapshots with a specific backup label (hanadaily in the syntax
examples), you can execute the script with 0 as the retention number. All snapshots matching that label are then
removed. However, removing all snapshots can affect the capabilities of disaster recovery.
A second possibility to delete specific snapshots is to use the script azure_hana_snapshot_delete.pl . This script is
designed to delete a snapshot or set of snapshots either by using the HANA backup ID as found in HANA Studio
or through the snapshot name itself. Currently, the backup ID is only tied to the snapshots created for the hana
snapshot type. Snapshot backups of the type logs and boot do not perform an SAP HANA snapshot. Therefore,
there is no backup ID to be found for those snapshots. If the snapshot name is entered, it looks for all snapshots
on the different volumes that match the entered snapshot name. The call syntax of the script is:

./azure_hana_snapshot_delete.pl

Execute the script as user root.


If you select a snapshot, you have the ability to delete each snapshot individually. You first supply the volume that
contains the snapshot, and then you supply the snapshot name. If the snapshot exists in that volume and is more
than one hour old, it is deleted. You can find the volume names and snapshot names by executing the
azure_hana_snapshot_details script.

IMPORTANT
If there is data that only exists on the snapshot that you are deleting, then if you execute the deletion, the data is lost
forever.
Recovering to the most recent HANA snapshot
If you experience a production-down scenario, the process of recovering from a storage snapshot can be initiated
as a customer incident with Microsoft Azure Support. It is a high-urgency matter if data was deleted in a
production system and the only way to retrieve the data is to restore the production database.
In a different situation, a point-in-time recovery might be low urgency and planned days in advance. You can plan
this recovery with SAP HANA on Azure Service Management instead of raising a high-priority problem. For
example, you might be planning to upgrade the SAP software by applying a new enhancement package. You then
need to revert to a snapshot that represents the state before the enhancement package upgrade.
Before you send the request, you need to prepare. The SAP HANA on Azure Service Management team can then
handle the request and provide the restored volumes. Afterward, you restore the HANA database based on the
snapshots. Here is how to prepare for the request:

NOTE
Your user interface might vary from the following screenshots, depending on the SAP HANA release that you are using.

1. Decide which snapshot to restore. Only the hana/data volume is restored unless you instruct otherwise.
2. Shut down the HANA instance.

3. Unmount the data volumes on each HANA database node. If the data volumes are still mounted to the
operating system, the restoration of the snapshot fails.

4. Open an Azure support request to instruct them about the restoration of a specific snapshot.
During the restoration: SAP HANA on Azure Service Management might ask you to attend a
conference call to ensure coordination, verification, and confirmation that the correct storage
snapshot is restored.
After the restoration: SAP HANA on Azure Service Management notifies you when the storage
snapshot has been restored.
5. After the restoration process is complete, remount all the data volumes.

6. Select the recovery options within SAP HANA Studio, if they do not automatically come up when you
reconnect to HANA DB through SAP HANA Studio. The following example shows a restoration to the last
HANA snapshot. A storage snapshot embeds one HANA snapshot. If you restore to the most recent storage
snapshot, it should be the most recent HANA snapshot. (If you restore to an older storage snapshot, you
need to locate the HANA snapshot based on the time the storage snapshot was taken.)
7. Select Recover the database to a specific data backup or storage snapshot.

8. Select Specify backup without catalog.


9. In the Destination Type list, select Snapshot.

10. Select Finish to start the recovery process.


11. The HANA database is restored and recovered to the HANA snapshot that's included in the storage
snapshot.
Recovering to the most recent state
The following process restores the HANA snapshot that is included in the storage snapshot. It then restores the
transaction-log backups to the most recent state of the database before restoring the storage snapshot.

IMPORTANT
Before you proceed, make sure that you have a complete and contiguous chain of transaction-log backups. Without these
backups, you cannot restore the current state of the database.

1. Complete steps 1-6 from Recovering to the most recent HANA snapshot.
2. Select Recover the database to its most recent state.
3. Specify the location of the most recent HANA log backups. The location needs to contain all the HANA
transaction-log backups from the HANA snapshot to the most recent state.

4. Select a backup as a base from which to recover the database. In our example, the HANA snapshot in the
screenshot is the HANA snapshot that was included in the storage snapshot.
5. Clear the Use Delta Backups check box if deltas do not exist between the time of the HANA snapshot and
the most recent state.
6. On the summary screen, select Finish to start the restoration procedure.
Recovering to another point in time
To recover to a point in time between the HANA snapshot (included in the storage snapshot) and one that is later
than the HANA snapshot point-in-time recovery, do the following:
1. Make sure that you have all the transaction-log backups from the HANA snapshot to the time you want to
recover to.
2. Begin the procedure under Recovering to the most recent state.
3. In step 2 of the procedure, in the Specify Recovery Type window, select Recover the database to the
following point in time, and specify the point in time. Then complete steps 3-6.
Monitoring the execution of snapshots
As you use storage snapshots of HANA Large Instances, you also need to monitor the execution of those storage
snapshots. The script that executes a storage snapshot writes output to a file and then saves it to the same
location as the Perl scripts. A separate file is written for each storage snapshot. The output of each file clearly
shows the various phases that the snapshot script executes:
1. Find the volumes that need to create a snapshot.
2. Find the snapshots taken from these volumes.
3. Delete eventual existing snapshots to match the number of snapshots you specified.
4. Create an SAP HANA snapshot.
5. Create the storage snapshot over the volumes.
6. Delete the SAP HANA snapshot.
7. Rename the most recent snapshot to .0.
The most important part of the script output is this part:

**********************Creating HANA snapshot**********************


Creating the HANA snapshot with command: "./hdbsql -n localhost -i 01 -U SCADMIN01 "backup data create
snapshot"" ...
HANA snapshot created successfully.
**********************Creating Storage snapshot**********************
Taking snapshot hourly.recent for hana_data_lhanad01_t020_vol ...
Snapshot created successfully.
Taking snapshot hourly.recent for hana_log_backup_lhanad01_t020_vol ...
Snapshot created successfully.
Taking snapshot hourly.recent for hana_log_lhanad01_t020_vol ...
Snapshot created successfully.
Taking snapshot hourly.recent for hana_shared_lhanad01_t020_vol ...
Snapshot created successfully.
Taking snapshot hourly.recent for sapmnt_lhanad01_t020_vol ...
Snapshot created successfully.
**********************Deleting HANA snapshot**********************
Deleting the HANA snapshot with command: "./hdbsql -n localhost -i 01 -U SCADMIN01 "backup data drop
snapshot"" ...
HANA snapshot deletion successfully.

You can see from this sample how the script records the creation of the HANA snapshot. In the scale-out case, this
process is initiated on the master node. The master node initiates the synchronous creation of the SAP HANA
snapshots on each of the worker nodes. Then, the storage snapshot is taken. After the successful execution of the
storage snapshots, the HANA snapshot is deleted. The deletion of the HANA snapshot is initiated from the master
node.

Disaster recovery principles


With HANA Large Instances, we offer a disaster-recovery functionality between HANA Large Instance stamps in
different Azure regions. For instance, if you deploy HANA Large Instance units in the US West region of Azure, you
can use the HANA Large Instance units in the US East region as disaster-recovery units. As mentioned earlier,
disaster recovery is not configured automatically, because it requires you to pay for another HANA Large Instance
unit in the DR region. The disaster-recovery setup works for scale-up as well as scale-out setups.
In the scenarios deployed so far, our customers use the unit in the DR region to run non-production systems that
use an installed HANA instance. The HANA Large Instance unit needs to be of the same SKU as the SKU used for
production purposes. The disk configuration between the server unit in the Azure production region and the
disaster recovery region looks like this:
As shown in this overview graphic, you then need to order a second set of disk volumes. The target disk volumes
are the same size as the production volumes for the production instance in the disaster recovery units. These disk
volumes are associated with the HANA Large Instance server unit in the disaster recovery site. The following
volumes are replicated from the production region to the DR site:
/hana/data
/hana/logbackups
/hana/shared (includes /usr/sap)
The /hana/log volume is not replicated, because the SAP HANA transaction log is not required for the way that
the restore from those volumes is performed.
The basis of the disaster-recovery functionality offered is the storage-replication functionality offered by the
HANA Large Instance infrastructure. The functionality that is used on the storage side is not a constant stream of
changes that replicate in an asynchronous manner as changes happen to the storage volume. Instead, it is a
mechanism that relies on the fact that snapshots of these volumes are created on a regular basis. The delta
between an already replicated snapshot and a new snapshot that is not yet replicated is then transferred to the
disaster-recovery site into target disk volumes. These snapshots are stored on the volumes and in the case of a
disaster recovery failover, need to be restored on those volumes.
The complete data of the volume is transferred once, at the start of the replication relationship; after that, only
the deltas between snapshots are transferred. As a result, the volumes in the DR site contain every one of the
volume snapshots performed in the production site. This fact enables you to eventually use the DR system to get
to an earlier status in order to recover lost data, without rolling back the production system.
In cases where you use HANA System Replication as high-availability functionality in your production site, only
the volumes of the Tier 2 (or replica) instance are replicated. This configuration might lead to a delay in storage
replication to the DR site if you maintain or take down the secondary replica (Tier 2) server unit or SAP HANA
instance in this unit.

IMPORTANT
As with multitier HANA System Replication, a shutdown of the Tier 2 HANA instance or server unit blocks replication to the
disaster-recovery site when you use the HANA Large Instance disaster-recovery functionality.

NOTE
The HANA Large Instance storage-replication functionality is mirroring and replicating storage snapshots. Therefore, if you
do not perform storage snapshots as introduced in the backup section of this document, there cannot be any replication to
the disaster-recovery site. Storage snapshot execution is a prerequisite to storage replication to the disaster-recovery site.

Preparation of the Disaster Recovery scenario


We assume that you have a production system running on HANA Large Instances in the production Azure region.
For the documentation following, let's assume that the SID of that HANA system is "PRD." We also assume that
you have a non-production system running on HANA Large Instances running in the disaster recovery Azure
region. For the documentation, we assume that its SID is "TST." So the configuration looks like this:
If the server instance has not been ordered already with the additional storage volume set, SAP HANA on Azure
Service Management will attach the additional set of volumes as a target for the production replica to the HANA
Large Instance unit that you are running the TST HANA instance on. For that purpose, you need to provide the SID
of your production HANA instance. After SAP HANA on Azure Service Management confirms the attachment of
those volumes, you need to mount those volumes to the HANA Large Instance unit.

The next step for you is to install the second SAP HANA instance on the HANA Large Instance unit in the disaster
recovery Azure region, where you run the TST HANA instance. The newly installed SAP HANA instance needs to
have the same SID. The users created need to have the same UID and Group ID that the production instance has. If
the installation succeeded, you need to:
Stop the newly installed SAP HANA instance on the HANA large Instance unit in the disaster recovery Azure
region.
Unmount these PRD volumes and contact SAP HANA on Azure Service Management. The volumes can't stay
mounted to the unit, because they can't be accessed while they function as a storage-replication target.
The operations team is going to establish the replication relationship between the PRD volumes in the production
Azure region and the PRD volumes in the disaster recovery Azure region.

IMPORTANT
The /hana/log volume will not be replicated because it is not necessary to restore the replicated SAP HANA database to a
consistent state in the disaster recovery site.

The next step for you is to set up or adjust the storage snapshot backup schedule to get to your RTO and RPO in
the disaster case. To minimize the recovery point objective, set the following replication intervals in the HANA
Large Instance service:
The volumes that are covered by the combined snapshot (snapshot type = hana) replicate every 15 minutes to
the equivalent storage volume targets in the disaster-recovery site.
The transaction-log backup volume (snapshot type = logs) replicates every three minutes to the equivalent
storage volume targets in the disaster-recovery site.
To minimize the recovery point objective, set up the following:
Perform a hana type storage snapshot (see "Step 7: Perform snapshots") every 30 minutes to 1 hour.
Perform SAP HANA transaction-log backups every 5 minutes.
Perform a logs type storage snapshot every 5-15 minutes. With this interval period, you should be able to
achieve an RPO of around 15-25 minutes.
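A rough worst-case RPO estimate for this setup can be sketched as the sum of the intervals involved. The additive model below is an assumption for illustration, not a guarantee from the service:

```shell
# Rough worst-case RPO sketch for the intervals recommended above (minutes).
log_backup=5        # SAP HANA transaction-log backup interval
logs_snapshot=15    # logs-type storage snapshot of /hana/logbackups
replication=3       # replication of the log backup volume to the DR site
echo "worst-case RPO: about $(( log_backup + logs_snapshot + replication )) minutes"
```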
With this setup, the sequence of transaction-log backups, storage snapshots, and the replication of the HANA
transaction-log backup volume and /hana/data, and /hana/shared (includes /usr/sap) might look like the data
shown in this graphic:
To achieve an even better RPO in the disaster-recovery case, you can copy the HANA transaction-log backups
from SAP HANA on Azure (Large Instances) to the other Azure region. To achieve this further RPO reduction,
perform the following rough steps:
1. Back up the HANA transaction log as frequently as possible to /hana/logbackups.
2. Use rsync to copy the transaction-log backups to the NFS share hosted Azure virtual machines. The VMs are in
Azure virtual networks in the Azure production region and in the DR regions. You need to connect both Azure
virtual networks to the circuit connecting the production HANA Large Instances to Azure. See the graphics in
the Network considerations for disaster recovery with HANA Large Instances section.
3. Keep the transaction-log backups in the region in the VM attached to the NFS exported storage.
4. In a disaster-failover case, supplement the transaction-log backups you find on the /hana/logbackups volume
with more recently taken transaction-log backups on the NFS share in the disaster-recovery site.
5. Now you can start a transaction-log backup to restore to the latest backup that might be saved over to the DR
region.
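Step 2 above can be sketched with rsync. The source path, the NFS mount point, and the SID (PRD) are assumptions for illustration; adjust them to your landscape.

```shell
# Sketch: copy new HANA transaction-log backups to the NFS share exported by
# the Azure VM. All paths and the SID are assumptions.
SRC="/hana/logbackups/PRD/"
DEST="/mnt/drnfs/logbackups/PRD/"   # NFS share mounted from the Azure VM

if [ -d "$SRC" ] && [ -d "$DEST" ]; then
  # --ignore-existing: transfer only backups not yet present on the share
  rsync -av --ignore-existing "$SRC" "$DEST"
else
  echo "skipping copy: $SRC or $DEST is not mounted" >&2
fi
```

Run this frequently (for example, from cron) so the share always holds the most recent log backups.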
After HANA Large Instance operations confirms that the replication relationship is set up and you start executing
storage snapshot backups, the data starts to replicate.
As the replication progresses, the snapshots on the PRD volumes in the disaster recovery Azure regions are not
restored. They are only stored. If the volumes are mounted in such a state, they represent the state in which you
unmounted those volumes after the PRD SAP HANA instance was installed in the server unit in the disaster
recovery Azure region. They also represent the storage backups that are not yet restored.
In case of a failover, you also can choose to restore to an older storage snapshot instead of the latest storage
snapshot.

Disaster-recovery failover procedure


If you want or need to failover to the DR site, you need to interact with the SAP HANA on Azure operations team.
In rough steps, the process so far looks like this:
1. Because you are running a non-production instance of HANA on the disaster-recovery unit of HANA Large
Instances, you need to shut down this instance. We assume that there is a dormant HANA production instance
pre-installed.
2. Make sure that no SAP HANA processes are running. You use the following command for this check:
/usr/sap/hostctrl/exe/sapcontrol -nr <HANA instance number> -function GetProcessList . The output should
show you the hdbdaemon process in a stopped state and no other HANA processes in a running or started
state.
3. Determine which snapshot name or SAP HANA backup ID you want to have the disaster-recovery site restored.
In real disaster-recovery cases, this snapshot is usually the latest snapshot. If you need to recover lost data, pick
an earlier snapshot.
4. Contact Azure support through a high-priority support request and ask for the restore of that snapshot (name
and date of the snapshot) or HANA backup ID on the DR site. The default is that operations restore the
/hana/data volume only. If you want to have the /hana/logbackups volumes as well, you need to specifically
state that. We are not recommending that you restore the /hana/shared volume. Instead, you should pick
specific files, like global.ini out of the .snapshot directory and its subdirectories after you remount the
/hana/shared volume for PRD. On the operations side, the following steps happen:
a. The replication of snapshots from the production volumes to the disaster-recovery volumes is stopped. This
might have already happened if an outage at the production site is the reason you need disaster recovery.
b. The storage snapshot with the name or backup ID you chose is restored on the disaster-recovery volumes.
c. After the restore, the disaster-recovery volumes are available to be mounted to the HANA Large Instance units
in the disaster-recovery region.
5. Mount the disaster-recovery volumes to the HANA Large Instance unit in the disaster-recovery site.
6. Start the so far dormant SAP HANA production instance.
7. If you chose to copy transaction-log backup logs additionally to reduce the RPO time, you need to merge those
transaction-log backups into the newly mounted DR /hana/logbackups directory. Don't overwrite existing
backups. Just copy newer backups that have not been replicated with the latest replication of a storage
snapshot.
8. You also can restore single files out of the snapshots that have been replicated to the /hana/shared/PRD
volume in the disaster recovery Azure region.
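The rule in step 7 (copy only the newer backups; never overwrite existing ones) can be demonstrated with a non-clobbering merge. The demo below uses temporary directories standing in for /hana/logbackups and the NFS share, and made-up file names.

```shell
# Demo of a non-overwriting merge. The temp dirs are stand-ins (assumption:
# the real paths are the restored /hana/logbackups volume and the NFS share).
restored=$(mktemp -d)   # stands in for the restored /hana/logbackups/PRD
nfsshare=$(mktemp -d)   # stands in for the NFS share with newer log backups

echo "restored copy" > "$restored/log_backup_100"
echo "nfs copy"      > "$nfsshare/log_backup_100"   # already present: keep the restored copy
echo "newer backup"  > "$nfsshare/log_backup_200"   # newer backup: must be merged in

# Copy each backup only if it does not already exist on the target
for f in "$nfsshare"/*; do
  name=$(basename "$f")
  [ -e "$restored/$name" ] || cp -a "$f" "$restored/"
done
ls "$restored"
```

After the merge, the target holds both files, and the already-replicated backup was left untouched.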
The next sequence of steps involves recovering the SAP HANA production instance based on the restored storage
snapshot and the transaction-log backups that are available. The steps look like this:
1. Change the backup location to /hana/logbackups by using SAP HANA Studio.

2. SAP HANA scans through the backup file locations and suggests the most recent transaction-log backup to
restore to. The scan can take a few minutes until a screen like the following appears:
3. Adjust some of the default settings:
Clear Use Delta Backups.
Select Initialize Log Area.
4. Select Finish.
A progress window, like the one shown here, should appear. Keep in mind that the example is of a disaster-
recovery restore of a 3-node scale-out SAP HANA configuration.
If the restore seems to hang at the Finish screen and does not show the progress screen, check to confirm that all
the SAP HANA instances on the worker nodes are running. If necessary, start the SAP HANA instances manually.
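The process-state check (used in step 2 of the failover procedure and again here for the worker nodes) can be scripted around sapcontrol. Since sapcontrol exists only on the HANA unit, the sketch below parses a captured sample output line; the sample text and instance number are assumptions.

```shell
# Real command (run as the <sid>adm user on the unit; instance number 10 is an example):
#   /usr/sap/hostctrl/exe/sapcontrol -nr 10 -function GetProcessList
# Sample output line (assumption) parsed below:
sample='hdbdaemon, HDB Daemon, GRAY, Stopped, , , 72891'

# GREEN/YELLOW (or "Running") entries mean HANA processes are still active
if echo "$sample" | grep -qE 'GREEN|YELLOW|Running'; then
  hana_state=running
else
  hana_state=stopped
fi
echo "HANA processes: $hana_state"
```

A stopped instance shows only the hdbdaemon process in a GRAY/Stopped state, as the failover procedure requires.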
Failback from DR to a production site
You can fail back from a DR to a production site. Let's look at the case that the failover into the disaster-recovery
site was caused by problems in the production Azure region and not by your need to recover lost data. This
means you have been running your SAP production workload for a while in the disaster-recovery site. As the
problems in the production site are resolved, you want to fail back to your production site. Because you can't lose
data, the step back into the production site involves several steps and close cooperation with the SAP HANA on
Azure operations team. It is up to you to trigger the operations team to start synchronizing back to the production
site after the problems are resolved.
The sequence of steps looks like this:
1. The SAP HANA on Azure operations team gets the trigger to synchronize the production storage volumes from
the disaster-recovery storage volumes, which now represent the production state. In this state, the HANA Large
Instance unit in the production site is shut down.
2. The SAP HANA on Azure operations team monitors the replication and makes sure that a catch-up is achieved
before informing you as a customer.
3. You shut down the applications that use the production HANA Instance in the disaster-recovery site. You then
perform a HANA transaction-log backup. Then you stop the HANA instance running on the HANA Large
Instance units in the disaster-recovery site.
4. After the HANA instance running in the HANA Large Instance unit in the disaster-recovery site is shut down,
the operations team manually synchronizes the disk volumes again.
5. The SAP HANA on Azure operations team starts the HANA Large Instance unit in the production site again and
hands it over to you. Make sure that the SAP HANA instance is in a shutdown state at the startup time of the
HANA Large Instance unit.
6. You perform the same database restore steps as you did when failing over to the disaster-recovery site
previously.
Monitoring disaster recovery replication
You can monitor the status of your storage replication progress by executing the script
azure_hana_replication_status.pl . This script must be run from a unit running in the disaster-recovery location.
Otherwise, it is not going to function as expected. The script works regardless of whether or not replication is
active. The script can be run for every HANA Large Instance unit of your tenant in the disaster-recovery location. It
cannot be used to obtain details about the boot volume.
Call the script like:

./azure_hana_replication_status.pl <HANA SID>

The output is broken down, by volume, into the following sections:


Link status
Current replication activity
Latest snapshot replicated
Size of the latest snapshot
Current lag time between snapshots (the time between the last completed snapshot replication and now)
The link status shows as Active unless the link between locations is down or a failover event is currently ongoing.
The replication activity addresses whether any data is currently being replicated or is idle, or if other activities are
currently happening to the link. The last snapshot replicated should only appear as snapmirror . The size of the
last snapshot is then displayed. Finally, the lag time is shown. The lag time represents the time from the scheduled
replication time to when the replication finishes. A lag time can be greater than an hour for data replication,
especially in the initial replication, even though replication has started. The lag time is going to continue to
increase until the ongoing replication finishes.
An example of an output can look like this:

hana_data_hm3_mnt00002_t020_dp
-------------------------------------------------
Link Status: Broken-Off
Current Replication Activity: Idle
Latest Snapshot Replicated: snapmirror.c169b434-75c0-11e6-9903-00a098a13ceb_2154095454.2017-04-21_051515
Size of Latest Snapshot Replicated: 244KB
Current Lag Time between snapshots: - ***Less than 90 minutes is acceptable***
How to troubleshoot and monitor SAP HANA (large
instances) on Azure
6/27/2017 6 min to read

Monitoring in SAP HANA on Azure (Large Instances)


SAP HANA on Azure (Large Instances) is no different from any other IaaS deployment: you need to monitor what
the OS and the application are doing and how they consume the following resources:
CPU
Memory
Network bandwidth
Disk space
As with Azure Virtual Machines, you need to figure out whether the resource classes named above are sufficient, or
whether they get depleted. Here is more detail on each of the different classes:
CPU resource consumption: The ratio that SAP defined for certain workloads against HANA is enforced to make
sure that enough CPU resources are available to work through the data that is stored in memory. Nevertheless,
there might be cases where HANA consumes a lot of CPU executing queries because of missing indexes or similar
issues. This means you should monitor the CPU resource consumption of the HANA Large Instance unit as well as
the CPU resources consumed by the specific HANA services.
Memory consumption: It is important to monitor memory consumption from within HANA, as well as outside of
HANA on the unit. Within HANA, monitor how the data consumes the memory allocated to HANA in order to stay
within the required sizing guidelines of SAP. You also want to monitor memory consumption on the Large Instance
level to make sure that additionally installed non-HANA software does not consume too much memory and
compete with HANA for memory.
Network bandwidth: The Azure VNet gateway is limited in bandwidth of data moving into the Azure VNet, so it is
helpful to monitor the data received by all the Azure VMs within a VNet to figure out how close you are to the
limits of the Azure gateway SKU you selected. On the HANA Large Instance unit, it does make sense to monitor
incoming and outgoing network traffic as well, and to keep track of the volumes that are handled over time.
Disk space: Disk space consumption usually increases over time. There are many reasons for this, the most
important being: growing data volumes, execution of transaction-log backups, storing trace files, and performing
storage snapshots. Therefore, it is important to monitor disk space usage and manage the disk space associated
with the HANA Large Instance unit.
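A simple fill-level report over the usual HANA mount points can be scripted with df. The mount-point list below reflects the standard Large Instance layout described in this documentation and may differ in your deployment.

```shell
# Print "<mount> usage: <percent>" for each HANA volume, or a note if the
# volume is not mounted on this host. Mount-point list is an assumption.
check_volume() {
  if df -P "$1" >/dev/null 2>&1; then
    echo "$1 usage: $(df -P "$1" | awk 'NR==2 {print $5}')"
  else
    echo "$1 not mounted on this host"
  fi
}

for mnt in /hana/data /hana/log /hana/shared /hana/logbackups; do
  check_volume "$mnt"
done
```

Scheduling such a report (or feeding the percentages into your monitoring system) helps catch growing log-backup and trace-file consumption before a volume fills up.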

Monitoring and troubleshooting from HANA side


In order to effectively analyze problems related to SAP HANA on Azure (Large Instances), it is useful to narrow
down the root cause of a problem. SAP has published a large amount of documentation to help you.
Applicable FAQs related to SAP HANA performance can be found in the following SAP Notes:
SAP Note #2222200 FAQ: SAP HANA Network
SAP Note #2100040 FAQ: SAP HANA CPU
SAP Note #1999997 FAQ: SAP HANA Memory
SAP Note #2000000 FAQ: SAP HANA Performance Optimization
SAP Note #1999930 FAQ: SAP HANA I/O Analysis
SAP Note #2177064 FAQ: SAP HANA Service Restart and Crashes
SAP HANA Alerts
As a first step, check the current SAP HANA alert logs. In SAP HANA Studio, go to Administration Console:
Alerts: Show: all alerts. This tab will show all SAP HANA alerts for specific values (free physical memory, CPU
utilization, etc.) that fall outside of the set minimum and maximum thresholds. By default, checks are auto-
refreshed every 15 minutes.

CPU
For an alert triggered due to improper threshold setting, a resolution is to reset to the default value or a more
reasonable threshold value.

The following alerts may indicate CPU resource problems:


Host CPU Usage (Alert 5)
Most recent savepoint operation (Alert 28)
Savepoint duration (Alert 54)
You may notice high CPU consumption on your SAP HANA database from one of the following:
Alert 5 (Host CPU usage) is raised for current or past CPU usage
The displayed CPU usage on the overview screen

The Load graph might show high CPU consumption, or high consumption in the past:

An alert triggered due to high CPU utilization could be caused by several reasons, including, but not limited to:
execution of certain transactions, data loading, hanging of jobs, long running SQL statements, and bad query
performance (for example, with BW on HANA cubes).
Refer to the SAP HANA Troubleshooting: CPU Related Causes and Solutions site for detailed troubleshooting steps.
Operating System
One of the most important checks for SAP HANA on Linux is to make sure that Transparent Huge Pages are
disabled, see SAP Note #2131662 Transparent Huge Pages (THP) on SAP HANA Servers.
You can check whether Transparent Huge Pages are enabled through the following Linux command: cat
/sys/kernel/mm/transparent_hugepage/enabled
If always is enclosed in brackets, the Transparent Huge Pages are enabled: [always] madvise never. If never is
enclosed in brackets, the Transparent Huge Pages are disabled: always madvise [never]
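The bracket check can be scripted; the helper below classifies the value read from /sys, and the comment shows the usual one-liner for disabling THP until the next reboot per SAP Note 2131662.

```shell
# Classify the content of /sys/kernel/mm/transparent_hugepage/enabled.
# To disable THP until the next reboot (as root):
#   echo never > /sys/kernel/mm/transparent_hugepage/enabled
thp_state() {
  case "$1" in
    *"[never]"*) echo disabled ;;   # [never] active: required setting for HANA
    *)           echo enabled  ;;   # [always] or [madvise] active: must be fixed
  esac
}

thp_state '[always] madvise never'    # sample value: THP enabled
thp_state 'always madvise [never]'    # sample value: THP disabled
# On a real server: thp_state "$(cat /sys/kernel/mm/transparent_hugepage/enabled)"
```

The two calls use sample strings so the sketch runs anywhere; on the HANA unit, feed in the actual file content as shown in the last comment.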
The following Linux command should return nothing: rpm -qa | grep ulimit. If it appears ulimit is installed,
uninstall it immediately.
Memory
You may observe that the amount of memory allocated by the SAP HANA database is higher than expected. The
following alerts indicate issues with high memory usage:
Host physical memory usage (Alert 1)
Memory usage of name server (Alert 12)
Total memory usage of Column Store tables (Alert 40)
Memory usage of services (Alert 43)
Memory usage of main storage of Column Store tables (Alert 45)
Runtime dump files (Alert 46)
Refer to the SAP HANA Troubleshooting: Memory Problems site for detailed troubleshooting steps.
Network
Refer to SAP Note #2081065 Troubleshooting SAP HANA Network and perform the network troubleshooting
steps in this SAP Note.
1. Analyze the round-trip time between server and client by running the SQL script HANA_Network_Clients.
2. Analyze internode communication by running the SQL script HANA_Network_Services.
3. Run Linux command ifconfig (the output shows if any packet losses are occurring).
4. Run Linux command tcpdump.
Also, use the open source IPERF tool (or similar) to measure real application network performance.
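A throughput measurement with iperf3 could look like the sketch below. The peer host name and the options are assumptions, and iperf3 (or the classic iperf) must be installed on both ends of the connection.

```shell
PEER="azvm-dr-nfs"   # assumption: an Azure VM reachable through the circuit

# Server side (on the peer):  iperf3 -s
# Client side (on the HANA Large Instance unit); 4 parallel streams, 30 seconds:
IPERF_CMD="iperf3 -c $PEER -t 30 -P 4"
echo "run on the client: $IPERF_CMD"
```

Comparing the measured throughput against the bandwidth of the selected gateway SKU shows how close the landscape is to the limits discussed above.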
Refer to the SAP HANA Troubleshooting: Networking Performance and Connectivity Problems site for detailed
troubleshooting steps.
Storage
From an end-user perspective, an application (or the system as a whole) runs sluggishly, is unresponsive, or can
even seem to hang if there are issues with I/O performance. In the Volumes tab in SAP HANA Studio, you can see
the attached volumes, and what volumes are used by each service.

In the lower part of the Attached volumes screen, you can see details of the volumes, such as files and I/O statistics.
Refer to the SAP HANA Troubleshooting: I/O Related Root Causes and Solutions and SAP HANA Troubleshooting:
Disk Related Root Causes and Solutions site for detailed troubleshooting steps.
Diagnostic Tools
Perform an SAP HANA Health Check through HANA_Configuration_Minichecks. This tool returns potentially critical
technical issues that should have already been raised as alerts in SAP HANA Studio.
Refer to SAP Note #1969700 SQL statement collection for SAP HANA and download the SQL Statements.zip file
attached to that note. Store this .zip file on the local hard drive.
In SAP HANA Studio, on the System Information tab, right-click in the Name column and select Import SQL
Statements.

Select the SQL Statements.zip file stored locally, and a folder with the corresponding SQL statements will be
imported. At this point, the many different diagnostic checks can be run with these SQL statements.
For example, to test SAP HANA System Replication bandwidth requirements, right-click the Bandwidth statement
under Replication: Bandwidth and select Open in SQL Console.
The complete SQL statement opens allowing input parameters (modification section) to be changed and then
executed.

Another example is right-clicking on the statements under Replication: Overview. Select Execute from the
context menu:

This results in information that helps with troubleshooting:

Do the same for HANA_Configuration_Minichecks and check for any X marks in the C (Critical) column.
Sample outputs:
HANA_Configuration_MiniChecks_Rev102.01+1 for general SAP HANA checks.

HANA_Services_Overview for an overview of what SAP HANA services are currently running.
HANA_Services_Statistics for SAP HANA service information (CPU, memory, etc.).

HANA_Configuration_Overview_Rev110+ for general information on the SAP HANA instance.


HANA_Configuration_Parameters_Rev70+ to check SAP HANA parameters.
High availability setup in SUSE using the STONITH
10/4/2017 11 min to read

This document provides detailed step-by-step instructions to set up high availability on the SUSE operating
system using the STONITH device.
Disclaimer: This guide is derived from testing the setup in the Microsoft HANA Large Instances environment, where
it works successfully. Because the Microsoft Service Management team for HANA Large Instances does not support
the operating system, you may need to contact SUSE for any further troubleshooting or clarification on the
operating system layer. The Microsoft Service Management team does set up the STONITH device, is fully
supportive, and can be involved in troubleshooting STONITH device issues.

Overview
To set up high availability using SUSE clustering, the following prerequisites must be met.
Prerequisites
HANA Large Instances are provisioned
Operating system is registered
HANA Large Instances servers are connected to the SMT server to get patches/packages
Operating system has the latest patches installed
NTP (time server) is set up
Read and understand the latest version of the SUSE documentation on HA setup
Setup details
In this guide, we used the following setup.
Operating System: SUSE 12 SP1
HANA Large Instances: 2xS192 (4 sockets, 2 TB)
HANA Version: HANA 2.0 SP1
Server Names: sapprdhdb95 (node1) and sapprdhdb96 (node2)
STONITH Device: iSCSI based STONITH device
NTP setup on one of the HANA Large Instance node
When you set up HANA Large Instances with HSR, you can request that the Microsoft Service Management team set
up STONITH. If you are an existing customer who has HANA Large Instances provisioned and needs the STONITH
device set up for existing blades, you need to provide the following information to the Microsoft Service
Management team in the service request form (SRF). You can request the SRF form through the Technical Account
Manager or your Microsoft contact for HANA Large Instance onboarding. New customers can request the STONITH
device at the time of provisioning. The inputs are available in the provisioning request form.
Server Name and Server IP address (e.g., myhanaserver1, 10.35.0.1)
Location (e.g., US East)
Customer Name (e.g., Microsoft)
Once the STONITH device is configured, the Microsoft Service Management team provides you the SBD device
name and the IP address of the iSCSI storage, which you can use to configure the STONITH setup.
To set up end-to-end HA using STONITH, the following steps need to be followed:
1. Identify the SBD device
2. Initialize the SBD device
3. Configuring the Cluster
4. Setting Up the Softdog Watchdog
5. Join the node to the cluster
6. Validate the cluster
7. Configure the resources to the cluster
8. Test the failover process

1. Identify the SBD device


This section describes how to determine the SBD device for your setup after the Microsoft Service Management
team has configured STONITH. This section only applies to existing customers. If you are a new
customer, the Microsoft Service Management team provides the SBD device name to you, and you can skip this section.
1.1 Modify /etc/iscsi/initiatorname.iscsi to iqn.1996-04.de.suse:01: .
Microsoft Service Management provides this string. This needs to be done on both nodes; however, the
node number is different on each node.

1.2 Modify /etc/iscsi/iscsid.conf: Set node.session.timeo.replacement_timeout=5 and node.startup = automatic. This


needs to be done on both the nodes.
1.3 Execute the discovery command, it shows four sessions. This needs to be done on both the nodes.

iscsiadm -m discovery -t st -p <IP address provided by Service Management>:3260

1.4 Execute the command to log in to the iSCSI device, it shows four sessions. This needs to be done on both the
nodes.

iscsiadm -m node -l
1.5 Execute the rescan script: rescan-scsi-bus.sh. This shows you the new disks created for you. Run it on both the
nodes. You should see a LUN number that is greater than zero (for example: 1, 2 etc.)

rescan-scsi-bus.sh

1.6 To get the device name, run the command fdisk -l. Run it on both the nodes. Pick the device with a size of
178 MiB.

fdisk -l
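Picking the 178-MiB device out of the fdisk -l output can be automated. The sketch below parses a captured sample disk-header line; the device name /dev/sdh is an assumption for illustration.

```shell
# Sample "fdisk -l" disk header line (assumption for illustration):
sample='Disk /dev/sdh: 178 MiB, 186646528 bytes, 364544 sectors'

# Extract the device path of the 178 MiB disk (strip the trailing colon)
sbd_candidate=$(printf '%s\n' "$sample" | awk '/178 MiB/ {gsub(":", "", $2); print $2}')
echo "SBD device candidate: $sbd_candidate"
# On the unit itself: fdisk -l | grep '178 MiB'
```

Run the equivalent grep on both nodes and confirm they report the same SBD device before initializing it.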

2. Initialize the SBD device


2.1 Initialize the SBD device on both the nodes

sbd -d <SBD Device Name> create

2.2 Check what has been written to the device. Do it on both the nodes

sbd -d <SBD Device Name> dump

3. Configuring the Cluster


This section describes the steps to setup the SUSE HA cluster.
3.1 Package installation
3.1.1 Check that the ha_sles and SAPHanaSR-doc patterns are installed. If they are not, install them. This needs to
be done on both the nodes.

zypper in -t pattern ha_sles


zypper in SAPHanaSR SAPHanaSR-doc

3.2 Setting up the cluster


3.2.1 You can either use the ha-cluster-init command or the yast2 wizard to set up the cluster. In this case, we used
the yast2 wizard. Perform this step only on the primary node.
Follow yast2> High Availability > Cluster

Click cancel as we already have the hawk2 package installed.


Click Continue
Expected value=Number of nodes deployed (in this case 2)

Click Next

Add node names and then click Add suggested files


Click Turn csync2 ON
Click Generate Pre-Shared-Keys, it shows below popup
Click OK
The authentication is performed using the IP addresses and pre-shared-keys in Csync2. The key file is generated
with csync2 -k /etc/csync2/key_hagroup. The file key_hagroup should be copied to all members of the cluster
manually after it's created. Ensure to copy the file from node 1 to node2.
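The key generation and copy can be sketched as below. The node name is the one used in this guide's setup, and the scp step only makes sense when run as root on node1 itself.

```shell
KEYFILE=/etc/csync2/key_hagroup
NODE2=sapprdhdb96   # node name from this guide's setup (an assumption elsewhere)

if [ -f "$KEYFILE" ]; then
  # Copy the generated pre-shared key to the second cluster member
  scp "$KEYFILE" "root@${NODE2}:${KEYFILE}"
else
  echo "generate the key first on node1: csync2 -k $KEYFILE" >&2
fi
```

Without this copy, csync2 authentication between the nodes fails and the configuration files cannot be synchronized.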

Click Next
In the default option, Booting was off; change it to on so that pacemaker is started on boot. You can make the choice
based on your setup requirements. Click Next, and the cluster configuration is complete.

4. Setting Up the Softdog Watchdog


This section describes the configuration of the watchdog (softdog).
4.1 Add the following line to /etc/init.d/boot.local on both the nodes.

modprobe softdog

4.2 Update the file /etc/sysconfig/sbd on both the nodes as below

SBD_DEVICE="<SBD Device Name>"


4.3 Load the kernel module on both the nodes by running the following command

modprobe softdog

4.4 Check and ensure that softdog is running as below on both the nodes

lsmod | grep dog

4.5 Start the SBD device on both the nodes

/usr/share/sbd/sbd.sh start

4.6 Test the SBD daemon on both the nodes. You see two entries after you configure it on both the nodes

sbd -d <SBD Device Name> list

4.7 Send a test message to one of your nodes

sbd -d <SBD Device Name> message <node2> <message>

4.8 On the Second node (node2) you can check the message status

sbd -d <SBD Device Name> list

4.9 To adopt the sbd config, update the file /etc/sysconfig/sbd as following. This needs to be done on both the
nodes
SBD_DEVICE=" <SBD Device Name>"
SBD_WATCHDOG="yes"
SBD_PACEMAKER="yes"
SBD_STARTMODE="clean"
SBD_OPTS=""

4.10 Start the pacemaker service on the Primary node (node1)

systemctl start pacemaker

If the pacemaker service fails, refer to Scenario 5: Pacemaker service fails

5. Joining the cluster


This section describes how to join the node to the cluster.
5.1 Add the node
Run the following command on node2 to let node2 join the cluster.

ha-cluster-join

If you receive an error while joining the cluster, refer to Scenario 6: Node 2 unable to join the cluster.

6. Validating the cluster


6.1 Start the cluster service
To check, and optionally start, the cluster service for the first time on both nodes, run:

systemctl status pacemaker


systemctl start pacemaker
6.2 Monitor the status
Run the command crm_mon to ensure that both nodes are online. You can run it on any node of the
cluster

crm_mon

You can also log in to hawk to check the cluster status https://:7630. The default user is hacluster and the password
is linux. If needed, you can change the password using passwd command.

7. Configure Cluster Properties and Resources


This section describes the steps to configure the cluster resources. In this example, we set up the following
resources; the rest can be configured (if needed) by referencing the SUSE HA guide. Perform this configuration
on the primary node only.
Cluster bootstrap
STONITH Device
The Virtual IP Address
7.1 Cluster bootstrap and more
Add cluster bootstrap. Create the file and add the text as following.

sapprdhdb95:~ # vi crm-bs.txt
# enter the following to crm-bs.txt
property $id="cib-bootstrap-options" \
no-quorum-policy="ignore" \
stonith-enabled="true" \
stonith-action="reboot" \
stonith-timeout="150s"
rsc_defaults $id="rsc-options" \
resource-stickiness="1000" \
migration-threshold="5000"
op_defaults $id="op-options" \
timeout="600"
Add the configuration to the cluster.

crm configure load update crm-bs.txt

7.2 STONITH device


Add resource STONITH. Create the file and add the text as following.

# vi crm-sbd.txt
# enter the following to crm-sbd.txt
primitive stonith-sbd stonith:external/sbd \
params pcmk_delay_max="15" \
op monitor interval="15" timeout="15"

Add the configuration to the cluster.

crm configure load update crm-sbd.txt

7.3 The virtual IP address


Add resource virtual IP. Create the file and add the text as below.

# vi crm-vip.txt
primitive rsc_ip_HA1_HDB10 ocf:heartbeat:IPaddr2 \
operations $id="rsc_ip_HA1_HDB10-operations" \
op monitor interval="10s" timeout="20s" \
params ip="10.35.0.197"

Add the configuration to the cluster.

crm configure load update crm-vip.txt

7.4 Validate the resources


When you run command crm_mon, you can see the two resources there.

Also, you can see the status at https://:7630/cib/live/state


8. Testing the failover process
To test the failover process, stop the pacemaker service on node1, and the resources fail over to node2.

service pacemaker stop

Now, stop the pacemaker service on node2, and the resources fail over to node1
Before failover

After failover
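The failover test can be wrapped in a small check. crm_mon and pacemaker exist only on the cluster nodes, so the sketch below guards on their presence before touching any service.

```shell
# Failover test sketch: only act when the cluster tooling is actually installed.
if command -v crm_mon >/dev/null 2>&1; then
  on_cluster_node=yes
  systemctl stop pacemaker   # on node1; the resources should move to node2
  crm_mon -1                 # one-shot status: confirm resources now run on node2
  systemctl start pacemaker  # bring node1 back into the cluster afterwards
else
  on_cluster_node=no
  echo "run on a cluster node: systemctl stop pacemaker && crm_mon -1"
fi
```

Repeat the same stop/verify/start cycle on node2 to confirm failback in the other direction.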
9. Troubleshooting
This section describes a few failure scenarios that can be encountered during the setup. You may not
necessarily face these issues.
Scenario 1: Cluster node not online
If any of the nodes does not show as online in the cluster manager, you can try the following to bring it online.
Start the iSCSI service

service iscsid start

And now you should be able to log in to that iSCSI node

iscsiadm -m node -l

The expected output looks like the following

sapprdhdb45:~ # iscsiadm -m node -l


Logging in to [iface: default, target: iqn.1992-08.com.netapp:hanadc11:1:t020, portal: 10.250.22.11,3260]
(multiple)
Logging in to [iface: default, target: iqn.1992-08.com.netapp:hanadc11:1:t020, portal: 10.250.22.12,3260]
(multiple)
Logging in to [iface: default, target: iqn.1992-08.com.netapp:hanadc11:1:t020, portal: 10.250.22.22,3260]
(multiple)
Logging in to [iface: default, target: iqn.1992-08.com.netapp:hanadc11:1:t020, portal: 10.250.22.21,3260]
(multiple)
Login to [iface: default, target: iqn.1992-08.com.netapp:hanadc11:1:t020, portal: 10.250.22.11,3260]
successful.
Login to [iface: default, target: iqn.1992-08.com.netapp:hanadc11:1:t020, portal: 10.250.22.12,3260]
successful.
Login to [iface: default, target: iqn.1992-08.com.netapp:hanadc11:1:t020, portal: 10.250.22.22,3260]
successful.
Login to [iface: default, target: iqn.1992-08.com.netapp:hanadc11:1:t020, portal: 10.250.22.21,3260]
successful.

Scenario 2: yast2 does not show graphical view


We used the yast2 graphical screen to set up the high availability cluster in this document. If yast2 does not open
with the graphical window as shown and throws a Qt error, follow the steps below. If it opens with the graphical
window, you can skip the steps.
Error

Expected Output
If yast2 does not open with the graphical view, follow these steps.
Install the required packages. You must be logged in as the root user and have SMT set up to download and install
the packages.
To install the packages, use yast>Software>Software Management>Dependencies> option Install recommended
packages. The following screenshot illustrates the expected screens.

NOTE
You need to perform the steps on both the nodes, so that you can access the yast2 graphical view from both the nodes.

Under Dependencies, select "Install Recommended Packages"

Review the changes and hit OK


Package installation proceeds

Click Next

Click Finish
You also need to install the libqt4 and libyui-qt packages.
zypper -n install libqt4

zypper -n install libyui-qt

Yast2 should be able to open the graphical view now as shown here.

Scenario 3: yast2 does not show the High Availability option


For the High Availability option to be visible on the yast2 control center, you need to install the additional packages.
Using Yast2>Software>Software management>Select the following patterns
SAP HANA server base
C/C++ Compiler and tools
High availability
SAP Application server base
The following screen shows the steps to install the patterns.
Using yast2 > Software > Software Management

Select the patterns


Click Accept

Click Continue
Click Next when the installation is complete

Scenario 4: HANA Installation fails with gcc assemblies error


The HANA installation fails with the following error.
To fix the issue, install the libgcc_s1 and libstdc++6 libraries as follows.

Scenario 5: Pacemaker service fails


The following issue occurred during the pacemaker service start.

sapprdhdb95:/ # systemctl start pacemaker


A dependency job for pacemaker.service failed. See 'journalctl -xn' for details.
sapprdhdb95:/ # journalctl -xn
-- Logs begin at Thu 2017-09-28 09:28:14 EDT, end at Thu 2017-09-28 21:48:27 EDT. --
Sep 28 21:48:27 sapprdhdb95 corosync[68812]: [SERV ] Service engine unloaded: corosync configuration map
Sep 28 21:48:27 sapprdhdb95 corosync[68812]: [QB ] withdrawing server sockets
Sep 28 21:48:27 sapprdhdb95 corosync[68812]: [SERV ] Service engine unloaded: corosync configuration ser
Sep 28 21:48:27 sapprdhdb95 corosync[68812]: [QB ] withdrawing server sockets
Sep 28 21:48:27 sapprdhdb95 corosync[68812]: [SERV ] Service engine unloaded: corosync cluster closed pr
Sep 28 21:48:27 sapprdhdb95 corosync[68812]: [QB ] withdrawing server sockets
Sep 28 21:48:27 sapprdhdb95 corosync[68812]: [SERV ] Service engine unloaded: corosync cluster quorum se
Sep 28 21:48:27 sapprdhdb95 corosync[68812]: [SERV ] Service engine unloaded: corosync profile loading s
Sep 28 21:48:27 sapprdhdb95 corosync[68812]: [MAIN ] Corosync Cluster Engine exiting normally
Sep 28 21:48:27 sapprdhdb95 systemd[1]: Dependency failed for Pacemaker High Availability Cluster Manager
-- Subject: Unit pacemaker.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit pacemaker.service has failed.
--
-- The result is dependency.

sapprdhdb95:/ # tail -f /var/log/messages


2017-09-28T18:44:29.675814-04:00 sapprdhdb95 corosync[57600]: [QB ] withdrawing server sockets
2017-09-28T18:44:29.676023-04:00 sapprdhdb95 corosync[57600]: [SERV ] Service engine unloaded: corosync cluster closed process group service v1.01
2017-09-28T18:44:29.725885-04:00 sapprdhdb95 corosync[57600]: [QB ] withdrawing server sockets
2017-09-28T18:44:29.726069-04:00 sapprdhdb95 corosync[57600]: [SERV ] Service engine unloaded: corosync cluster quorum service v0.1
2017-09-28T18:44:29.726164-04:00 sapprdhdb95 corosync[57600]: [SERV ] Service engine unloaded: corosync profile loading service
2017-09-28T18:44:29.776349-04:00 sapprdhdb95 corosync[57600]: [MAIN ] Corosync Cluster Engine exiting normally
2017-09-28T18:44:29.778177-04:00 sapprdhdb95 systemd[1]: Dependency failed for Pacemaker High Availability Cluster Manager.
2017-09-28T18:44:40.141030-04:00 sapprdhdb95 systemd[1]: [/usr/lib/systemd/system/fstrim.timer:8] Unknown lvalue 'Persistent' in section 'Timer'
2017-09-28T18:45:01.275038-04:00 sapprdhdb95 cron[57995]: pam_unix(crond:session): session opened for user root by (uid=0)
2017-09-28T18:45:01.308066-04:00 sapprdhdb95 CRON[57995]: pam_unix(crond:session): session closed for user root

To fix it, delete the following line from the file /usr/lib/systemd/system/fstrim.timer:

Persistent=true
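The edit can be scripted with sed; the snippet below demonstrates it on a scratch copy of the unit (the file content here is illustrative). On the real node, run the same sed as root against /usr/lib/systemd/system/fstrim.timer, then run systemctl daemon-reload before starting pacemaker again:

```shell
# Demonstrate removing the unsupported 'Persistent=true' line on a scratch
# copy of the timer unit (file content below is illustrative).
cat > /tmp/fstrim.timer <<'EOF'
[Timer]
OnCalendar=weekly
Persistent=true
EOF
sed -i '/^Persistent=true$/d' /tmp/fstrim.timer
cat /tmp/fstrim.timer
```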

Scenario 6: Node 2 unable to join the cluster


When joining node2 to the existing cluster by using the ha-cluster-join command, the following error occurs.

ERROR: Can't retrieve SSH keys from <Primary Node>


To fix it, run the following commands on both nodes:

ssh-keygen -q -f /root/.ssh/id_rsa -C 'Cluster Internal' -N ''


cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys

After the preceding fix, node2 should be added to the cluster.

10. General Documentation


You can find more information on SUSE HA setup in the following articles:
SAP HANA SR Performance Optimized Scenario
Storage based fencing
Quickstart: Manual installation of single-instance SAP
HANA on Azure VMs
8/21/2017 22 min to read

Introduction
This guide helps you set up a single-instance SAP HANA on Azure virtual machines (VMs) when you install SAP
NetWeaver 7.5 and SAP HANA 1.0 SP12 manually. The focus of this guide is on deploying SAP HANA on Azure. It
does not replace SAP documentation.

NOTE
This guide describes deployments of SAP HANA into Azure VMs. For information on deploying SAP HANA into HANA large
instances, see Using SAP on Azure virtual machines (VMs).

Prerequisites
This guide assumes that you are familiar with infrastructure as a service (IaaS) basics, such as:
How to deploy virtual machines or virtual networks via the Azure portal or PowerShell.
The Azure cross-platform command-line interface (CLI), including the option to use JavaScript Object Notation
(JSON) templates.
This guide also assumes that you are familiar with:
SAP HANA and SAP NetWeaver and how to install them on-premises.
Installing and operating SAP HANA and SAP application instances on Azure.
The following concepts and procedures:
Planning for SAP deployment on Azure, including Azure Virtual Network planning and Azure Storage
usage. See SAP NetWeaver on Azure Virtual Machines (VMs) - Planning and implementation guide.
Deployment principles and ways to deploy VMs in Azure. See Azure Virtual Machines deployment for
SAP.
High availability for SAP NetWeaver ASCS (ABAP SAP Central Services), SCS (SAP Central Services), and
ERS (Enqueue Replication Server) on Azure. See High availability for SAP NetWeaver on Azure VMs.
Details on how to improve efficiency in leveraging a multi-SID installation of ASCS/SCS on Azure. See
Create a SAP NetWeaver multi-SID configuration.
Principles of running SAP NetWeaver based on Linux-driven VMs in Azure. See Running SAP NetWeaver
on Microsoft Azure SUSE Linux VMs. This guide provides specific settings for Linux in Azure VMs and
details on how to properly attach Azure storage disks to Linux VMs.
At this time, Azure VMs are certified by SAP for SAP HANA scale-up configurations only. Scale-out configurations
with SAP HANA workloads are not yet supported. For SAP HANA high availability in cases of scale-up
configurations, see High availability of SAP HANA on Azure virtual machines (VMs).
If you want to deploy an SAP HANA instance or an S/4HANA or BW/4HANA system quickly, you should consider
using the SAP Cloud Appliance Library. You can find documentation about deploying, for example, an S/4HANA
system through SAP CAL on Azure in this guide. All you need is an Azure subscription and an SAP user that can be
registered with SAP Cloud Appliance Library.
Additional resources
SAP HANA backup
For information on backing up SAP HANA databases on Azure VMs, see:
Backup guide for SAP HANA on Azure Virtual Machines
SAP HANA Azure Backup on file level
SAP HANA backup based on storage snapshots
SAP Cloud Appliance Library
For information on using SAP Cloud Appliance Library to deploy S/4HANA or BW/4HANA, see Deploy SAP
S/4HANA or BW/4HANA on Microsoft Azure.
SAP HANA-supported operating systems
For information on SAP HANA-supported operating systems, see SAP Support Note #2235581 - SAP HANA:
Supported Operating Systems. Azure VMs support only a subset of these operating systems. The following
operating systems are supported to deploy SAP HANA on Azure:
SUSE Linux Enterprise Server 12.x
Red Hat Enterprise Linux 7.2
For additional SAP documentation about SAP HANA and different Linux operating systems, see:
SAP Support Note #171356 - SAP Software on Linux: General Information
SAP Support Note #1944799 - SAP HANA Guidelines for SLES Operating System Installation
SAP Support Note #2205917 - SAP HANA DB Recommended OS Settings for SLES 12 for SAP Applications
SAP Support Note #1984787 - SUSE Linux Enterprise Server 12: Installation Notes
SAP Support Note #1391070 - Linux UUID Solutions
SAP Support Note #2009879 - SAP HANA Guidelines for Red Hat Enterprise Linux (RHEL) Operating System
SAP Support Note #2292690 - SAP HANA DB: Recommended OS settings for RHEL 7
SAP monitoring in Azure
For information about SAP monitoring in Azure, see:
SAP Note 2191498. This note discusses SAP "enhanced monitoring" with Linux VMs on Azure.
SAP Note 1102124. This note discusses information about SAPOSCOL on Linux.
SAP Note 2178632. This note discusses key monitoring metrics for SAP on Microsoft Azure.
Azure VM types
Azure VM types and SAP-supported workload scenarios used with SAP HANA are documented in SAP certified IaaS
Platforms.
Azure VM types that are certified by SAP for SAP NetWeaver or the S/4HANA application layer are documented in
SAP Note 1928533 - SAP Applications on Azure: Supported Products and Azure VM types.

NOTE
SAP-Linux-Azure integration is supported only on Azure Resource Manager and not the classic deployment model.

Manual installation of SAP HANA


This guide describes how to manually install SAP HANA on Azure VMs in two different ways:
By using SAP Software Provisioning Manager (SWPM) as part of a distributed NetWeaver installation in the
"install database instance" step
By using the SAP HANA database lifecycle manager tool, HDBLCM, and then installing NetWeaver
You can also use SWPM to install all components (SAP HANA, the SAP application server, and the ASCS instance) in
one single VM, as described in this SAP HANA blog announcement. This option isn't described in this Quickstart
guide, but the issues that you must take into consideration are the same.
Before you start an installation, we recommend that you read the "Preparing Azure VMs for manual installation of
SAP HANA" section later in this guide. Doing so can help prevent several basic mistakes that might occur when you
use only a default Azure VM configuration.

Key steps for SAP HANA installation when you use SAP SWPM
This section lists the key steps for a manual, single-instance SAP HANA installation when you use SAP SWPM to
perform a distributed SAP NetWeaver 7.5 installation. The individual steps are explained in more detail in
screenshots later in this guide.
1. Create an Azure virtual network that includes two test VMs.
2. Deploy the two Azure VMs with operating systems (in our example, SUSE Linux Enterprise Server (SLES) and
SLES for SAP Applications 12 SP1), according to the Azure Resource Manager model.
3. Attach two Azure standard or premium storage disks (for example, 75-GB or 500-GB disks) to the application
server VM.
4. Attach premium storage disks to the HANA DB server VM. For details, see the "Disk setup" section later in this
guide.
5. Depending on size or throughput requirements, attach multiple disks, and then create striped volumes by using
either logical volume management or a multiple-devices administration tool (MDADM) at the OS level inside the
VM.
6. Create XFS file systems on the attached disks or logical volumes.
7. Mount the new XFS file systems at the OS level. Use one file system for all the SAP software. Use the other file
system for the /sapmnt directory and backups, for example. On the SAP HANA DB server, mount the XFS file
systems on the premium storage disks as /hana and /usr/sap. This process is necessary to prevent the root file
system, which isn't large on Linux Azure VMs, from filling up.
8. Enter the local IP addresses of the test VMs in the /etc/hosts file.
9. Enter the nofail parameter in the /etc/fstab file.
10. Set Linux kernel parameters according to the Linux OS release you are using. For more information, see the
appropriate SAP notes that discuss HANA and the "Kernel parameters" section in this guide.
11. Add swap space.
12. Optionally, install a graphical desktop on the test VMs. Otherwise, use a remote SAPinst installation.
13. Download the SAP software from the SAP Service Marketplace.
14. Install the SAP ASCS instance on the app server VM.
15. Share the /sapmnt directory among the test VMs by using NFS. The application server VM is the NFS server.
16. Install the database instance, including HANA, by using SWPM on the DB server VM.
17. Install the primary application server (PAS) on the application server VM.
18. Start SAP Management Console (SAP MC). Connect with SAP GUI or HANA Studio, for example.

Key steps for SAP HANA installation when you use HDBLCM
This section lists the key steps for a manual, single-instance SAP HANA installation when you use SAP HDBLCM to
perform a distributed SAP NetWeaver 7.5 installation. The individual steps are explained in more detail in
screenshots throughout this guide.
1. Create an Azure virtual network that includes two test VMs.
2. Deploy two Azure VMs with operating systems (in our example, SLES and SLES for SAP Applications 12 SP1)
according to the Azure Resource Manager model.
3. Attach two Azure standard or premium storage disks (for example, 75-GB or 500-GB disks) to the app server
VM.
4. Attach premium storage disks to the HANA DB server VM. For details, see the "Disk setup" section later in this
guide.
5. Depending on size or throughput requirements, attach multiple disks and create striped volumes by using either
logical volume management or a multiple-devices administration tool (MDADM) at the OS level inside the VM.
6. Create XFS file systems on the attached disks or logical volumes.
7. Mount the new XFS file systems at the OS level. Use one file system for all the SAP software, and use the other
one for the /sapmnt directory and backups, for example. On the SAP HANA DB server, mount the XFS file
systems on the premium storage disks as /hana and /usr/sap. This process is necessary to help prevent the root
file system, which isn't large on Linux Azure VMs, from filling up.
8. Enter the local IP addresses of the test VMs in the /etc/hosts file.
9. Enter the nofail parameter in the /etc/fstab file.
10. Set kernel parameters according to the Linux OS release you are using. For more information, see the
appropriate SAP notes that discuss HANA and the "Kernel parameters" section in this guide.
11. Add swap space.
12. Optionally, install a graphical desktop on the test VMs. Otherwise, use a remote SAPinst installation.
13. Download the SAP software from the SAP Service Marketplace.
14. Create a group, sapsys, with group ID 1001, on the HANA DB server VM.
15. Install SAP HANA on the DB server VM by using HANA Database Lifecycle Manager (HDBLCM).
16. Install the SAP ASCS instance on the app server VM.
17. Share the /sapmnt directory among the test VMs by using NFS. The application server VM is the NFS server.
18. Install the database instance, including HANA, by using SWPM on the HANA DB server VM.
19. Install the primary application server (PAS) on the application server VM.
20. Start SAP MC. Connect through SAP GUI or HANA Studio.

Preparing Azure VMs for a manual installation of SAP HANA


This section covers the following topics:
OS updates
Disk setup
Kernel parameters
File systems
The /etc/hosts file
The /etc/fstab file
OS updates
Check for Linux OS updates and fixes before installing additional software. By installing a patch, you might be able
to avoid a call to the support desk.
Make sure that you are using:
SUSE Linux Enterprise Server for SAP Applications.
Red Hat Enterprise Linux for SAP Applications or Red Hat Enterprise Linux for SAP HANA.
If you haven't already, register the OS deployment with your Linux subscription from the Linux vendor. Note that
SUSE offers OS images for SAP applications that already include the required services and are registered automatically.
Here is an example of checking for available patches for SUSE Linux by using the zypper command:
sudo zypper list-patches

Depending on the kind of issue, patches are classified by category and severity. Commonly used values for
category are: security, recommended, optional, feature, document, or yast. Commonly used values for
severity are: critical, important, moderate, low, or unspecified.
The zypper command looks only for the updates that your installed packages need. For example, you could use
this command:
sudo zypper patch --category=security,recommended --severity=critical,important

You can add the parameter --dry-run to test the update without actually updating the system.
Disk setup
The root file system in a Linux VM on Azure has a size limitation. Therefore, it's necessary to attach additional disk
space to an Azure VM for running SAP. For SAP application server Azure VMs, the use of Azure standard storage
disks might be sufficient. However, for SAP HANA DBMS Azure VMs, the use of Azure Premium Storage disks for
production and non-production implementations is mandatory.
Based on the SAP HANA TDI Storage Requirements, the following Azure Premium Storage configuration is
suggested:

VM SKU   RAM      /hana/data and /hana/log          /hana/shared   /root volume   /usr/sap
                  (striped with LVM or MDADM)
GS5      448 GB   2 x P30                           1 x P20        1 x P10        1 x P10

In the suggested disk configuration, the HANA data volume and log volume are placed on the same set of Azure
premium storage disks that are striped with LVM or MDADM. It is not necessary to define any RAID redundancy
level because Azure Premium Storage keeps three images of the disks for redundancy. To make sure that you
configure enough storage, consult the SAP HANA TDI Storage Requirements and SAP HANA Server Installation and
Update Guide. Also consider the different virtual hard disk (VHD) throughput volumes of the different Azure
premium storage disks as documented in High-performance Premium Storage and managed disks for VMs.
You can add more premium storage disks to the HANA DBMS VMs for storing database or transaction log backups.
For more information about the two main tools used to configure striping, see the following articles:
Configure software RAID on Linux
Configure LVM on a Linux VM in Azure
For more information on attaching disks to Azure VMs running Linux as a guest OS, see Add a disk to a Linux VM.
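As an illustrative sketch of that striping step, a minimal LVM layout for the HANA volumes might look like the following. All names are placeholders (/dev/sdc and /dev/sdd stand for the attached premium disks; check lsblk), the stripe size is an assumption rather than an SAP recommendation, and the commands must run as root on the DB VM. Consult the SAP HANA TDI storage guidance before sizing anything:

```
# Create physical volumes on the two attached data disks (placeholders)
pvcreate /dev/sdc /dev/sdd
# Group them into one volume group
vgcreate vg_hana /dev/sdc /dev/sdd
# Stripe one logical volume across both disks (-i 2); the 256 KiB stripe
# size (-I 256) is an assumption for illustration
lvcreate -i 2 -I 256 -l 100%FREE -n lv_hana vg_hana
# Create the XFS file system on the striped volume
mkfs.xfs /dev/vg_hana/lv_hana
```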
Azure Premium Storage allows you to define disk caching modes. For the striped set holding /hana/data and
/hana/log, disk caching should be disabled. For the other volumes (disks), the caching mode should be set to
ReadOnly.
For more information, see Premium Storage: High-performance storage for Azure Virtual Machine workloads.
To find sample JSON templates for creating VMs, go to Azure Quickstart Templates. The vm-simple-sles template is
a basic template. It includes a storage section, with an additional 100-GB data disk. This template can be used as a
base. You can adapt the template to your specific configuration.
NOTE
It is important to attach the Azure storage disk by using a UUID as documented in Running SAP NetWeaver on Microsoft
Azure SUSE Linux VMs.

In the test environment, two Azure standard storage disks were attached to the SAP app server VM, as shown in the
following screenshot. One disk stored all the SAP software (including NetWeaver 7.5, SAP GUI, and SAP HANA) for
installation. The second disk ensured that enough free space would be available for additional requirements (for
example, backup and test data) and for the /sapmnt directory (that is, SAP profiles) to be shared among all VMs that
belong to the same SAP landscape.

Kernel parameters
SAP HANA requires specific Linux kernel settings, which are not part of the standard Azure gallery images and must
be set manually. Depending on whether you use SUSE or Red Hat, the parameters might be different. The SAP
Notes listed earlier give information about those parameters. In the screenshots shown, SUSE Linux 12 SP1 was
used.
SLES for SAP Applications 12 GA and SLES for SAP Applications 12 SP1 have a new tool, tuned-adm, that replaces
the old sapconf tool. A special SAP HANA profile is available for tuned-adm. To tune the system for SAP HANA,
enter the following as a root user:
tuned-adm profile sap-hana

For more information about tuned-adm, see the SUSE documentation about tuned-adm.
In the following screenshot, you can see how tuned-adm changed the transparent_hugepage and numa_balancing
values, according to the required SAP HANA settings.
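You can verify the two settings from the shell as well; the sysfs and procfs paths below are the standard kernel locations for these values, though they can be absent on kernels built without NUMA support:

```shell
# Print the current THP and NUMA-balancing settings, if the kernel
# exposes them (paths are standard but not guaranteed on every kernel).
for f in /sys/kernel/mm/transparent_hugepage/enabled \
         /proc/sys/kernel/numa_balancing; do
  if [ -r "$f" ]; then
    echo "$f: $(cat "$f")"
  else
    echo "$f: not present on this kernel"
  fi
done
```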
To make the SAP HANA kernel settings permanent, use grub2 on SLES 12. For more information about grub2, go
to the Configuration File Structure section of the SUSE documentation.
The following screenshot shows how the kernel settings were changed in the configuration file and then compiled
by using grub2-mkconfig:
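For reference, the grub2 approach means appending the parameters to the kernel command line in /etc/default/grub and then regenerating the configuration. The parameter values below are illustrative; take the authoritative list from the SAP notes for your OS release:

```
# /etc/default/grub (excerpt) - append the HANA-related parameters to the
# existing value:
GRUB_CMDLINE_LINUX_DEFAULT="... transparent_hugepage=never numa_balancing=disable"

# Then rebuild the configuration (standard SLES 12 location):
# grub2-mkconfig -o /boot/grub2/grub.cfg
```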

Another option is to change the settings by using YaST and the Boot Loader > Kernel Parameters settings:

File systems
The following screenshot shows two file systems that were created on the SAP app server VM on top of the two
attached Azure standard storage disks. Both file systems are of type XFS and are mounted to /sapdata and
/sapsoftware.
It is not mandatory to structure your file systems in this way. You have other options for structuring the disk space.
The most important consideration is to prevent the root file system from running out of free space.

Regarding the SAP HANA DB VM, during a database installation, when you use SAPinst (SWPM) and the typical
installation option, everything is installed under /hana and /usr/sap. The default location for the SAP HANA log
backup is under /usr/sap. Again, because it's important to prevent the root file system from running out of storage
space, make sure that there is enough free space under /hana and /usr/sap before you install SAP HANA by using
SWPM.
For a description of the standard file-system layout of SAP HANA, see the SAP HANA Server Installation and Update
Guide.

When you install SAP NetWeaver on a standard SLES/SLES for SAP Applications 12 Azure gallery image, a message
is displayed that says there is no swap space, as shown in the following screenshot. To dismiss this message, you
can manually add a swap file by using dd, mkswap, and swapon. To learn how, search for "Adding a swap file
manually" in the Using the YaST Partitioner section of the SUSE documentation.
Another option is to configure swap space by using the Linux VM agent. For more information, see the Azure Linux
Agent User Guide.
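The manual approach from the SUSE documentation boils down to three commands; the sketch below formats a small 32-MB demo file (size a real swap file according to the relevant SAP notes, and place it outside the root file system):

```shell
# Create and format a small demo swap file (32 MB; illustrative size only)
dd if=/dev/zero of=/tmp/demo_swapfile bs=1M count=32 status=none
chmod 600 /tmp/demo_swapfile
mkswap /tmp/demo_swapfile
# Activation requires root:
#   swapon /tmp/demo_swapfile
```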
The /etc/hosts file
Before you start to install SAP, make sure you include the host names and IP addresses of the SAP VMs in the
/etc/hosts file. Deploy all the SAP VMs within one Azure virtual network, and then use the internal IP addresses, as
shown here:
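The entries might look like the following (host names and addresses are placeholders; use the internal addresses of your own virtual network):

```
# /etc/hosts - internal IP addresses of the two test VMs
10.0.0.4   hanaappvm
10.0.0.5   hanadbvm
```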

The /etc/fstab file


It is helpful to add the nofail parameter to the fstab file. This way, if something goes wrong with the disks, the VM
does not hang in the boot process. But remember that additional disk space might not be available, and processes
might fill up the root file system. If /hana is missing, SAP HANA won't start.
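An /etc/fstab entry with the nofail parameter might look like the following sketch (the UUIDs are placeholders; read the real ones with blkid, in line with the UUID recommendation mentioned earlier):

```
# /etc/fstab (excerpt) - mount by UUID; nofail lets the VM boot even if a
# disk is missing
UUID=<uuid-of-hana-disk>  /hana     xfs  defaults,nofail  0  2
UUID=<uuid-of-sap-disk>   /usr/sap  xfs  defaults,nofail  0  2
```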
Graphical GNOME desktop on SLES 12/SLES for SAP Applications 12
This section covers the following topics:
Installing the GNOME desktop and xrdp on SLES 12/SLES for SAP Applications 12
Running Java-based SAP MC by using Firefox on SLES 12/SLES for SAP Applications 12
You can also use alternatives such as Xterminal or VNC (not described in this guide).
Installing the GNOME desktop and xrdp on SLES 12/SLES for SAP Applications 12
If you have a Windows background, you can easily use a graphical desktop directly within the SAP Linux VMs to run
Firefox, SAPinst, SAP GUI, SAP MC, or HANA Studio, and connect to the VM through the Remote Desktop Protocol
(RDP) from a Windows computer. Depending on your company policies about adding graphical user interfaces to
production and non-production Linux-based systems, you might want to install GNOME on your server. To install
the GNOME desktop on an Azure SLES 12/SLES for SAP Applications 12 VM:
1. Install the GNOME desktop by entering the following command (for example, in a PuTTY window):
zypper in -t pattern gnome-basic

2. Install xrdp to allow a connection to the VM through RDP:


zypper in xrdp

3. Edit /etc/sysconfig/windowmanager, and set the default window manager to GNOME:


DEFAULT_WM="gnome"

4. Run chkconfig to make sure that xrdp starts automatically after a reboot:
chkconfig --level 3 xrdp on

5. If you have an issue with the RDP connection, try to restart (from a PuTTY window, for example):
/etc/xrdp/xrdp.sh restart

6. If the xrdp restart mentioned in the previous step doesn't work, check /var/run for a stale .pid file:
Look for xrdp.pid. If you find it, remove it, and try the restart again.
Starting SAP MC
After you install the GNOME desktop, starting the graphical Java-based SAP MC from Firefox while running in an
Azure SLES 12/SLES for SAP Applications 12 VM might display an error because of the missing Java-browser plug-in.
The URL to start the SAP MC is <server>:5<instance_number>13 .
For more information, see Starting the Web-Based SAP Management Console.
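For example, with instance number 00, the URL resolves as follows (the host name is a placeholder):

```shell
# SAP MC port pattern: '5' + <two-digit instance number> + '13'
instance=00
echo "http://hanadbvm:5${instance}13"   # prints http://hanadbvm:50013
```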
The following screenshot shows the error message that is displayed when the Java-browser plug-in is missing:

One way to solve the problem is to install the missing plug-in by using YaST, as shown in the following screenshot:

When you re-enter the SAP Management Console URL, a message appears asking you to activate the plug-in:
You might also receive an error message about a missing file, javafx.properties. This is related to the requirement of
Oracle Java 1.8 for SAP GUI 7.4. (See SAP Note 2059429.) Neither the IBM Java version nor the openjdk package
delivered with SLES/SLES for SAP Applications 12 includes the needed javafx.properties file. The solution is to
download and install Java SE 8 from Oracle.
For information about a similar issue with openjdk on openSUSE, see the discussion thread SAPGui 7.4 Java for
openSUSE 42.1 Leap.

Manual installation of SAP HANA: SWPM


The series of screenshots in this section shows the key steps for installing SAP NetWeaver 7.5 and SAP HANA SP12
when you use SWPM (SAPinst). As part of a NetWeaver 7.5 installation, SWPM can also install the HANA database
as a single instance.
In a sample test environment, we installed just one Advanced Business Application Programming (ABAP) app
server. As shown in the following screenshot, we used the Distributed System option to install the ASCS and
primary application server instances in one Azure VM and SAP HANA as the database system in another Azure VM.
After the ASCS instance is installed on the app server VM and is set to "green" in the SAP Management Console
(shown in the following screenshot), the /sapmnt directory (including the SAP profile directory) must be shared
with the SAP HANA DB server VM. The DB installation step needs access to this information. The best way to
provide access is to use NFS, which can be configured by using YaST.

On the app server VM, the /sapmnt directory should be shared via NFS by using the rw and no_root_squash
options. The defaults are ro and root_squash, which might lead to problems when you install the database
instance.
As the next screenshot shows, the /sapmnt share from the app server VM must be configured on the SAP HANA DB
server VM by using NFS Client (and YaST).

To perform a distributed NetWeaver 7.5 installation (Database Instance), as shown in the following screenshot,
sign in to the SAP HANA DB server VM and start SWPM.
After you select typical installation and the path to the installation media, enter a DB SID, the host name, the
instance number, and the DB system administrator password.
Enter the password for the DBACOCKPIT schema:
Enter the password for the SAPABAP1 schema:
After each task is completed, a green check mark is displayed next to each phase of the DB installation process. The
message "Execution of ... Database Instance has completed" is displayed.
After successful installation, the SAP Management Console should also show the DB instance as "green" and
display the full list of SAP HANA processes (hdbindexserver, hdbcompileserver, and so forth).

The following screenshot shows the parts of the file structure under the /hana/shared directory that SWPM created
during the HANA installation. Because there is no option to specify a different path, it's important to mount
additional disk space under the /hana directory before the SAP HANA installation by using SWPM. This prevents
the root file system from running out of free space.
This screenshot shows the file structure of the /usr/sap directory:

The last step of the distributed ABAP installation is to install the primary application server instance:
After the primary application server instance and SAP GUI are installed, use the DBA Cockpit transaction to
confirm that the SAP HANA installation has finished correctly:
As a final step, you might want to first install HANA Studio in the SAP app server VM, and then connect to the SAP
HANA instance that's running on the DB server VM:

Manual installation of SAP HANA: HDBLCM


In addition to installing SAP HANA as part of a distributed installation by using SWPM, you can install the HANA
standalone first, by using HDBLCM. You can then install SAP NetWeaver 7.5, for example. The screenshots in this
section show how this process works.
For more information about the HANA HDBLCM tool, see:
Choosing the Correct SAP HANA HDBLCM for Your Task
SAP HANA Lifecycle Management Tools
SAP HANA Server Installation and Update Guide
To avoid problems with a default group ID setting for the <HANA SID>adm user (created by the HDBLCM tool),
define a new group called sapsys by using group ID 1001 before you install SAP HANA via HDBLCM:
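The group creation itself is a single groupadd call; the snippet below wraps it in a non-destructive existence check (creating the group requires root):

```shell
# Create the sapsys group with GID 1001 only if it does not exist yet
# (groupadd itself requires root; the getent check is harmless anywhere)
if getent group sapsys >/dev/null; then
  echo "sapsys group already exists"
else
  echo "run as root: groupadd -g 1001 sapsys"
fi
```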
When you start HDBLCM the first time, a simple start menu is displayed. Select item 1, Install new system, as
shown in the following screenshot:

The following screenshot displays all the key options that you selected previously.

IMPORTANT
Directories that are named for HANA log and data volumes, as well as the installation path (/hana/shared in this sample) and
/usr/sap, should not be part of the root file system. These directories belong to the Azure data disks that were attached to
the VM (described in the "Disk setup" section). This approach helps prevent the root file system from running out of space. In
the following screenshot, you can see that the HANA system administrator has user ID 1005 and is part of the sapsys
group (ID 1001) that was defined before the installation.

You can check the <HANA SID>adm user (azdadm in the following screenshot) details in the /etc/passwd file:
After you install SAP HANA by using HDBLCM, you can see the file structure in SAP HANA Studio, as shown in the
following screenshot. The SAPABAP1 schema, which includes all the SAP NetWeaver tables, isn't available yet.

After you install SAP HANA, you can install SAP NetWeaver on top of it. As shown in the following screenshot, the
installation was performed as a distributed installation by using SWPM (as described in the previous section). When
you install the database instance by using SWPM, enter the same data that you used with HDBLCM (for example,
host name, HANA SID, and instance number). SWPM then uses the existing HANA installation and adds more schemas.
The following screenshot shows the SWPM installation step where you enter data about the DBACOCKPIT schema:
Enter data about the SAPABAP1 schema:
After the SWPM database instance installation is completed, you can see the SAPABAP1 schema in SAP HANA
Studio:

Finally, after the SAP app server and SAP GUI installations are completed, you can verify the HANA DB instance by
using the DBA Cockpit transaction:
SAP software downloads
You can download software from the SAP Service Marketplace, as shown in the following screenshots.
Download NetWeaver 7.5 for Linux/HANA:

Download HANA SP12 Platform Edition:


Deploy SAP S/4HANA or BW/4HANA on Azure
6/27/2017 5 min to read

This article describes how to deploy S/4HANA on Azure by using the SAP Cloud Appliance Library (SAP CAL) 3.0. To
deploy other SAP HANA-based solutions, such as BW/4HANA, follow the same steps.

NOTE
For more information about the SAP CAL, go to the SAP Cloud Appliance Library website. SAP also has a blog about the SAP
Cloud Appliance Library 3.0.

NOTE
As of May 29, 2017, you can use the Azure Resource Manager deployment model in addition to the less-preferred classic
deployment model to deploy the SAP CAL. We recommend that you use the new Resource Manager deployment model and
disregard the classic deployment model.

Step-by-step process to deploy the solution


The following sequence of screenshots shows you how to deploy S/4HANA on Azure by using the SAP CAL. The
process works the same way for other solutions, such as BW/4HANA.
The Solutions page shows some of the SAP CAL HANA-based solutions available on Azure. SAP S/4HANA 1610
FPS01, Fully-Activated Appliance is in the middle row:

Create an account in the SAP CAL


1. To sign in to the SAP CAL for the first time, use your SAP S-User or other user registered with SAP. Then
define an SAP CAL account that is used by the SAP CAL to deploy appliances on Azure. In the account
definition, you need to:
a. Select the deployment model on Azure (Resource Manager or classic).
b. Enter your Azure subscription. An SAP CAL account can be assigned to one subscription only. If you need
more than one subscription, you need to create another SAP CAL account.
c. Give the SAP CAL permission to deploy into your Azure subscription.

NOTE
The next steps show how to create an SAP CAL account for Resource Manager deployments. If you already have an
SAP CAL account that is linked to the classic deployment model, you need to follow these steps to create a new SAP
CAL account. The new SAP CAL account needs to deploy in the Resource Manager model.

2. Create a new SAP CAL account. The Accounts page shows three choices for Azure:
a. Microsoft Azure (classic) is the classic deployment model and is no longer preferred.
b. Microsoft Azure is the new Resource Manager deployment model.
c. Windows Azure operated by 21Vianet is an option in China that uses the classic deployment model.
To deploy in the Resource Manager model, select Microsoft Azure.

3. Enter the Azure Subscription ID that can be found on the Azure portal.

4. To authorize the SAP CAL to deploy into the Azure subscription you defined, click Authorize. The following
page appears in the browser tab:
5. If more than one user is listed, choose the Microsoft account that is linked to be the coadministrator of the
Azure subscription you selected. The following page appears in the browser tab:

6. Click Accept. If the authorization is successful, the SAP CAL account definition displays again. After a short
time, a message confirms that the authorization process was successful.
7. To assign the newly created SAP CAL account to your user, enter your User ID in the text box on the right
and click Add.

8. To associate your account with the user that you use to sign in to the SAP CAL, click Review.
9. To create the association between your user and the newly created SAP CAL account, click Create.
You successfully created an SAP CAL account that is able to:
Use the Resource Manager deployment model.
Deploy SAP systems into your Azure subscription.
Now you can start to deploy S/4HANA into your user subscription in Azure.

NOTE
Before you continue, determine whether you have Azure core quotas for Azure H-Series VMs. At the moment, the SAP CAL
uses H-Series VMs of Azure to deploy some of the SAP HANA-based solutions. Your Azure subscription might not have any
H-Series core quota. If so, you might need to contact Azure support to get a quota of at least 16 H-Series cores.

NOTE
When you deploy a solution on Azure in the SAP CAL, you might find that you can choose only one Azure region. To deploy
into Azure regions other than the one suggested by the SAP CAL, you need to purchase a CAL subscription from SAP. You
also might need to open a message with SAP to have your CAL account enabled to deliver into Azure regions other than the
ones initially suggested.

Deploy a solution
Let's deploy a solution from the Solutions page of the SAP CAL. The SAP CAL has two sequences to deploy:
A basic sequence that uses one page to define the system to be deployed
An advanced sequence that gives you certain choices on VM sizes
We demonstrate the basic path to deployment here.
1. On the Account Details page, you need to:
a. Select an SAP CAL account. (Use an account that is associated to deploy with the Resource Manager
deployment model.)
b. Enter an instance Name.
c. Select an Azure Region. The SAP CAL suggests a region. If you need another Azure region and you don't
have an SAP CAL subscription, you need to order a CAL subscription with SAP.
d. Enter a master Password for the solution of eight or nine characters. The password is used for the
administrators of the different components.

2. Click Create, and in the message box that appears, click OK.

3. In the Private Key dialog box, click Store to store the private key in the SAP CAL. To use password
protection for the private key, click Download.
4. Read the SAP CAL Warning message, and click OK.

Now the deployment takes place. After some time, depending on the size and complexity of the solution (the
SAP CAL provides an estimate), the status is shown as active and ready for use.
5. To find the virtual machines collected with the other associated resources in one resource group, go to the
Azure portal:
6. On the SAP CAL portal, the status appears as Active. To connect to the solution, click Connect. Different
options are available to connect to the different components deployed within this solution.

7. Before you can use one of the options to connect to the deployed systems, click Getting Started Guide.
The documentation names the users for each of the connectivity methods. The passwords for those users are
set to the master password you defined at the beginning of the deployment process. In the documentation,
other more functional users are listed with their passwords, which you can use to sign in to the deployed
system.
For example, if you use the SAP GUI that's preinstalled on the Windows Remote Desktop machine, the S/4
system might look like this:

Or if you use the DBACockpit, the instance might look like this:
Within a few hours, a healthy SAP S/4 appliance is deployed in Azure.
If you bought an SAP CAL subscription, SAP fully supports deployments through the SAP CAL on Azure. The
support queue is BC-VCM-CAL.
High Availability of SAP HANA on Azure Virtual
Machines (VMs)
7/31/2017 16 min to read Edit Online

On-premises, you can use either HANA System Replication or shared storage to establish high availability for
SAP HANA. On Azure, we currently support only setting up HANA System Replication. SAP HANA System Replication
consists of one master node and at least one slave node. Changes to the data on the master node are replicated to
the slave nodes synchronously or asynchronously.
This article describes how to deploy the virtual machines, configure the cluster framework, and install and
configure SAP HANA System Replication. The example configurations and installation commands use instance
number 03 and HANA System ID HDB.
Read the following SAP Notes and papers first
SAP Note 1928533, which has:
List of Azure VM sizes that are supported for the deployment of SAP software
Important capacity information for Azure VM sizes
Supported SAP software, and operating system (OS) and database combinations
Required SAP kernel version for Windows and Linux on Microsoft Azure
SAP Note 2015553 lists prerequisites for SAP-supported SAP software deployments in Azure.
SAP Note 2205917 has recommended OS settings for SUSE Linux Enterprise Server for SAP Applications
SAP Note 1944799 has SAP HANA Guidelines for SUSE Linux Enterprise Server for SAP Applications
SAP Note 2178632 has detailed information about all monitoring metrics reported for SAP in Azure.
SAP Note 2191498 has the required SAP Host Agent version for Linux in Azure.
SAP Note 2243692 has information about SAP licensing on Linux in Azure.
SAP Note 1984787 has general information about SUSE Linux Enterprise Server 12.
SAP Note 1999351 has additional troubleshooting information for the Azure Enhanced Monitoring Extension
for SAP.
SAP Community WIKI has all required SAP Notes for Linux.
Azure Virtual Machines planning and implementation for SAP on Linux
Azure Virtual Machines deployment for SAP on Linux
Azure Virtual Machines DBMS deployment for SAP on Linux
SAP HANA SR Performance Optimized Scenario The guide contains all required information to set up SAP
HANA System Replication on-premises. Use this guide as a baseline.

Deploying Linux
The resource agent for SAP HANA is included in SUSE Linux Enterprise Server for SAP Applications. The Azure
Marketplace contains an image for SUSE Linux Enterprise Server for SAP Applications 12 with BYOS (Bring Your
Own Subscription) that you can use to deploy new virtual machines.
Manual Deployment
1. Create a Resource Group
2. Create a Virtual Network
3. Create two Storage Accounts
4. Create an Availability Set
Set max update domain
5. Create a Load Balancer (internal)
Select VNET of step above
6. Create Virtual Machine 1
https://portal.azure.com/#create/suse-byos.sles-for-sap-byos12-sp1
SLES For SAP Applications 12 SP1 (BYOS)
Select Storage Account 1
Select Availability Set
7. Create Virtual Machine 2
https://portal.azure.com/#create/suse-byos.sles-for-sap-byos12-sp1
SLES For SAP Applications 12 SP1 (BYOS)
Select Storage Account 2
Select Availability Set
8. Add Data Disks
9. Configure the load balancer
   a. Create a frontend IP pool
      a. Open the load balancer, select frontend IP pool and click Add
      b. Enter the name of the new frontend IP pool (for example hana-frontend)
      c. Click OK
      d. After the new frontend IP pool is created, write down its IP address
   b. Create a backend pool
      a. Open the load balancer, select backend pools and click Add
      b. Enter the name of the new backend pool (for example hana-backend)
      c. Click Add a virtual machine
      d. Select the Availability Set you created earlier
      e. Select the virtual machines of the SAP HANA cluster
      f. Click OK
   c. Create a health probe
      a. Open the load balancer, select health probes and click Add
      b. Enter the name of the new health probe (for example hana-hp)
      c. Select TCP as protocol, port 62503, keep Interval 5 and Unhealthy threshold 2
      d. Click OK
   d. Create load balancing rules
      a. Open the load balancer, select load balancing rules and click Add
      b. Enter the name of the new load balancer rule (for example hana-lb-30315)
      c. Select the frontend IP address, backend pool, and health probe you created earlier (for example hana-frontend)
      d. Keep protocol TCP, enter port 30315
      e. Increase idle timeout to 30 minutes
      f. Make sure to enable Floating IP
      g. Click OK
      h. Repeat the steps above for port 30317
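As an alternative to the portal steps above, the two load-balancing rules can also be created with Azure CLI 2.0. This is only a sketch: the resource group and load balancer names (hana-rg, hana-lb) are hypothetical placeholders, and DRY_RUN=1 (the default here) just prints the commands instead of running them.

```shell
# Hypothetical names; replace with your resource group and load balancer.
rg="hana-rg"
lb="hana-lb"

# With DRY_RUN=1 (default) the az commands are only printed, not executed.
DRY_RUN="${DRY_RUN:-1}"
run() { if [ "$DRY_RUN" = "1" ]; then echo "az $*"; else az "$@"; fi; }

rules=""
for port in 30315 30317; do
  run network lb rule create --resource-group "$rg" --lb-name "$lb" \
      --name "hana-lb-$port" --protocol Tcp \
      --frontend-port "$port" --backend-port "$port" \
      --frontend-ip-name hana-frontend --backend-pool-name hana-backend \
      --probe-name hana-hp --idle-timeout 30 --floating-ip true
  rules="$rules hana-lb-$port"
done
echo "rules:$rules"
```

Note that Floating IP corresponds to the --floating-ip parameter; without it, the cluster's virtual IP cannot be served through the load balancer.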
Deploy with template
You can use one of the quick start templates on github to deploy all required resources. The template deploys the
virtual machines, the load balancer, availability set etc. Follow these steps to deploy the template:
1. Open the database template or the converged template on the Azure portal. The database template only creates
the load-balancing rules for a database, whereas the converged template also creates the load-balancing rules
for an ASCS/SCS and ERS (Linux only) instance. If you plan to install an SAP NetWeaver-based system and you
also want to install the ASCS/SCS instance on the same machines, use the converged template.
2. Enter the following parameters
a. Sap System Id
Enter the SAP system Id of the SAP system you want to install. The Id will be used as a prefix for the
resources that are deployed.
b. Stack Type (only applicable if you use the converged template)
Select the SAP NetWeaver stack type
c. Os Type
Select one of the Linux distributions. For this example, select SLES 12 BYOS
d. Db Type
Select HANA
e. Sap System Size
The amount of SAPS the new system will provide. If you are not sure how many SAPS the system will
require, please ask your SAP Technology Partner or System Integrator
f. System Availability
Select HA
g. Admin Username and Admin Password
A new user is created that can be used to log on to the machine.
h. New Or Existing Subnet
Determines whether a new virtual network and subnet should be created or an existing subnet should be
used. If you already have a virtual network that is connected to your on-premises network, select existing.
i. Subnet Id
The ID of the subnet to which the virtual machines should be connected. Select the subnet of your VPN
or ExpressRoute virtual network to connect the virtual machine to your on-premises network. The ID
usually looks like /subscriptions/<subscription id>/resourceGroups/<resource group name
>/providers/Microsoft.Network/virtualNetworks/<virtual network name>/subnets/<subnet name>
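Because the subnet ID follows a fixed pattern, you can assemble it yourself if you know the names involved. A minimal sketch; all values below are hypothetical placeholders:

```shell
# All values below are hypothetical; substitute your own.
subscription_id="00000000-0000-0000-0000-000000000000"
resource_group="hana-rg"
vnet_name="hana-vnet"
subnet_name="hana-subnet"

# Assemble the subnet resource ID from its components.
subnet_id="/subscriptions/${subscription_id}/resourceGroups/${resource_group}/providers/Microsoft.Network/virtualNetworks/${vnet_name}/subnets/${subnet_name}"
echo "$subnet_id"
```

On a real subscription, `az network vnet subnet show --query id` should return the same value for an existing subnet.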

Setting up Linux HA
The following items are prefixed with either [A] - applicable to all nodes, [1] - only applicable to node 1 or [2] - only
applicable to node 2.
1. [A] SLES for SAP BYOS only - Register SLES to be able to use the repositories
2. [A] SLES for SAP BYOS only - Add public-cloud module
3. [A] Update SLES

sudo zypper update

4. [1] Enable ssh access

sudo ssh-keygen -t dsa

# Enter file in which to save the key (/root/.ssh/id_dsa): -> ENTER
# Enter passphrase (empty for no passphrase): -> ENTER
# Enter same passphrase again: -> ENTER

# copy the public key
sudo cat /root/.ssh/id_dsa.pub

5. [2] Enable ssh access

sudo ssh-keygen -t dsa

# Enter file in which to save the key (/root/.ssh/id_dsa): -> ENTER
# Enter passphrase (empty for no passphrase): -> ENTER
# Enter same passphrase again: -> ENTER

# insert the public key you copied in the last step into the authorized keys file on the second server
sudo vi /root/.ssh/authorized_keys

# copy the public key
sudo cat /root/.ssh/id_dsa.pub

6. [1] Enable ssh access

# insert the public key you copied in the last step into the authorized keys file on the first server
sudo vi /root/.ssh/authorized_keys

7. [A] Install HA extension

sudo zypper install sle-ha-release fence-agents

8. [A] Setup disk layout


a. LVM
We generally recommend using LVM for volumes that store data and log files. The example below
assumes that the virtual machines have four data disks attached that should be used to create three
volumes.
Create physical volumes for all disks that you want to use.

sudo pvcreate /dev/sdc


sudo pvcreate /dev/sdd
sudo pvcreate /dev/sde
sudo pvcreate /dev/sdf

Create a volume group for the data files, one volume group for the log files and one for the shared
directory of SAP HANA

sudo vgcreate vg_hana_data /dev/sdc /dev/sdd


sudo vgcreate vg_hana_log /dev/sde
sudo vgcreate vg_hana_shared /dev/sdf

Create the logical volumes

sudo lvcreate -l 100%FREE -n hana_data vg_hana_data


sudo lvcreate -l 100%FREE -n hana_log vg_hana_log
sudo lvcreate -l 100%FREE -n hana_shared vg_hana_shared
sudo mkfs.xfs /dev/vg_hana_data/hana_data
sudo mkfs.xfs /dev/vg_hana_log/hana_log
sudo mkfs.xfs /dev/vg_hana_shared/hana_shared

Create the mount directories and copy the UUID of all logical volumes
sudo mkdir -p /hana/data
sudo mkdir -p /hana/log
sudo mkdir -p /hana/shared
# write down the id of /dev/vg_hana_data/hana_data, /dev/vg_hana_log/hana_log and
/dev/vg_hana_shared/hana_shared
sudo blkid

Create fstab entries for the three logical volumes

sudo vi /etc/fstab

Insert these lines to /etc/fstab

/dev/disk/by-uuid/<UUID of /dev/vg_hana_data/hana_data> /hana/data xfs defaults,nofail 0 2
/dev/disk/by-uuid/<UUID of /dev/vg_hana_log/hana_log> /hana/log xfs defaults,nofail 0 2
/dev/disk/by-uuid/<UUID of /dev/vg_hana_shared/hana_shared> /hana/shared xfs defaults,nofail 0 2

Mount the new volumes

sudo mount -a

b. Plain Disks
For small or demo systems, you can place your HANA data and log files on one disk. The following
commands create a partition on /dev/sdc and format it with xfs.

sudo fdisk /dev/sdc


sudo mkfs.xfs /dev/sdc1

# write down the id of /dev/sdc1


sudo /sbin/blkid
sudo vi /etc/fstab

Insert this line to /etc/fstab

/dev/disk/by-uuid/<UUID> /hana xfs defaults,nofail 0 2

Create the target directory and mount the disk.

sudo mkdir /hana


sudo mount -a
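The fstab entries in the LVM step can be generated from the blkid output rather than typed by hand. A minimal sketch follows; the UUID values are placeholders, and on a real system you would take them from `sudo blkid` and append the verified lines to /etc/fstab:

```shell
# Placeholder UUIDs; on a real system take these from `sudo blkid`.
uuid_data="<UUID of /dev/vg_hana_data/hana_data>"
uuid_log="<UUID of /dev/vg_hana_log/hana_log>"
uuid_shared="<UUID of /dev/vg_hana_shared/hana_shared>"

# Generate the three fstab entries into a scratch file; review it before
# appending its contents to /etc/fstab.
{
  printf '/dev/disk/by-uuid/%s /hana/data xfs defaults,nofail 0 2\n' "$uuid_data"
  printf '/dev/disk/by-uuid/%s /hana/log xfs defaults,nofail 0 2\n' "$uuid_log"
  printf '/dev/disk/by-uuid/%s /hana/shared xfs defaults,nofail 0 2\n' "$uuid_shared"
} > fstab.hana
cat fstab.hana
```

The nofail option keeps the VM bootable even if a data disk is temporarily unavailable.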

9. [A] Setup host name resolution for all hosts


You can either use a DNS server or modify the /etc/hosts on all nodes. This example shows how to use the
/etc/hosts file. Replace the IP address and the hostname in the following commands

sudo vi /etc/hosts
Insert the following lines to /etc/hosts. Change the IP address and hostname to match your environment

<IP address of host 1> <hostname of host 1>


<IP address of host 2> <hostname of host 2>

10. [1] Install Cluster

sudo ha-cluster-init

# Do you want to continue anyway? [y/N] -> y


# Network address to bind to (e.g.: 192.168.1.0) [10.79.227.0] -> ENTER
# Multicast address (e.g.: 239.x.x.x) [239.174.218.125] -> ENTER
# Multicast port [5405] -> ENTER
# Do you wish to use SBD? [y/N] -> N
# Do you wish to configure an administration IP? [y/N] -> N

11. [2] Add node to cluster

sudo ha-cluster-join

# WARNING: NTP is not configured to start at system boot.


# WARNING: No watchdog device found. If SBD is used, the cluster will be unable to start without a
watchdog.
# Do you want to continue anyway? [y/N] -> y
# IP address or hostname of existing node (e.g.: 192.168.1.1) [] -> IP address of node 1 e.g. 10.0.0.5
# /root/.ssh/id_dsa already exists - overwrite? [y/N] N

12. [A] Change hacluster password to the same password

sudo passwd hacluster

13. [A] Configure corosync to use another transport and add a nodelist. The cluster will not work otherwise.

sudo vi /etc/corosync/corosync.conf

Add the following content to the file.

[...]
interface {
    [...]
}
transport: udpu
}
nodelist {
  node {
    ring0_addr: <ip address of node 1>
  }
  node {
    ring0_addr: <ip address of node 2>
  }
}
logging {
[...]
Then restart the corosync service

sudo service corosync restart

14. [A] Install HANA HA packages

sudo zypper install SAPHanaSR

Installing SAP HANA


Follow chapter 4 of the SAP HANA SR Performance Optimized Scenario guide to install SAP HANA System
Replication.
1. [A] Run hdblcm from the HANA DVD
Choose installation -> 1
Select additional components for installation -> 1
Enter Installation Path [/hana/shared]: -> ENTER
Enter Local Host Name [..]: -> ENTER
Do you want to add additional hosts to the system? (y/n) [n]: -> ENTER
Enter SAP HANA System ID:
Enter Instance Number [00]:
HANA Instance number. Use 03 if you used the Azure Template or followed the example above
Select Database Mode / Enter Index [1]: -> ENTER
Select System Usage / Enter Index [4]:
Select the system Usage
Enter Location of Data Volumes [/hana/data/HDB]: -> ENTER
Enter Location of Log Volumes [/hana/log/HDB]: -> ENTER
Restrict maximum memory allocation? [n]: -> ENTER
Enter Certificate Host Name For Host '...' [...]: -> ENTER
Enter SAP Host Agent User (sapadm) Password:
Confirm SAP Host Agent User (sapadm) Password:
Enter System Administrator (hdbadm) Password:
Confirm System Administrator (hdbadm) Password:
Enter System Administrator Home Directory [/usr/sap/HDB/home]: -> ENTER
Enter System Administrator Login Shell [/bin/sh]: -> ENTER
Enter System Administrator User ID [1001]: -> ENTER
Enter ID of User Group (sapsys) [79]: -> ENTER
Enter Database User (SYSTEM) Password:
Confirm Database User (SYSTEM) Password:
Restart system after machine reboot? [n]: -> ENTER
Do you want to continue? (y/n):
Validate the summary and enter y to continue
2. [A] Upgrade SAP Host Agent
Download the latest SAP Host Agent archive from the SAP Softwarecenter and run the following command
to upgrade the agent. Replace the path to the archive to point to the file you downloaded.

sudo /usr/sap/hostctrl/exe/saphostexec -upgrade -archive <path to SAP Host Agent SAR>


3. [1] Create HANA replication (as root)
Run the following command. Make sure to replace bold strings (HANA System ID HDB and instance number
03) with the values of your SAP HANA installation.

PATH="$PATH:/usr/sap/HDB/HDB03/exe"
hdbsql -u system -i 03 'CREATE USER hdbhasync PASSWORD "passwd"'
hdbsql -u system -i 03 'GRANT DATA ADMIN TO hdbhasync'
hdbsql -u system -i 03 'ALTER USER hdbhasync DISABLE PASSWORD LIFETIME'

4. [A] Create keystore entry (as root)

PATH="$PATH:/usr/sap/HDB/HDB03/exe"
hdbuserstore SET hdbhaloc localhost:30315 hdbhasync passwd

5. [1] Backup database (as root)

PATH="$PATH:/usr/sap/HDB/HDB03/exe"
hdbsql -u system -i 03 "BACKUP DATA USING FILE ('initialbackup')"

6. [1] Switch to the sapsid user (for example hdbadm) and create the primary site.

su - hdbadm
hdbnsutil -sr_enable -name=SITE1

7. [2] Switch to the sapsid user (for example hdbadm) and create the secondary site.

su - hdbadm
sapcontrol -nr 03 -function StopWait 600 10
hdbnsutil -sr_register --remoteHost=saphanavm1 --remoteInstance=03 --replicationMode=sync --name=SITE2

Configure Cluster Framework


Change the default settings
sudo vi crm-defaults.txt
# enter the following to crm-defaults.txt

property $id="cib-bootstrap-options" \
no-quorum-policy="ignore" \
stonith-enabled="true" \
stonith-action="reboot" \
stonith-timeout="150s"
rsc_defaults $id="rsc-options" \
resource-stickiness="1000" \
migration-threshold="5000"
op_defaults $id="op-options" \
timeout="600"

# now we load the file to the cluster


sudo crm configure load update crm-defaults.txt

Create STONITH device


The STONITH device uses a Service Principal to authorize against Microsoft Azure. Please follow these steps to
create a Service Principal.
1. Go to https://portal.azure.com
2. Open the Azure Active Directory blade
Go to Properties and write down the Directory Id. This is the tenant id.
3. Click App registrations
4. Click Add
5. Enter a Name, select Application Type "Web app/API", enter a sign-on URL (for example http://localhost) and
click Create
6. The sign-on URL is not used and can be any valid URL
7. Select the new App and click Keys in the Settings tab
8. Enter a description for a new key, select "Never expires" and click Save
9. Write down the Value. It is used as the password for the Service Principal
10. Write down the Application Id. It is used as the username (login id in the steps below) of the Service Principal
The Service Principal does not have permissions to access your Azure resources by default. You need to give the
Service Principal permissions to start and stop (deallocate) all virtual machines of the cluster.
1. Go to https://portal.azure.com
2. Open the All resources blade
3. Select the virtual machine
4. Click Access control (IAM)
5. Click Add
6. Select the role Owner
7. Enter the name of the application you created above
8. Click OK
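The portal steps above can also be scripted with Azure CLI 2.0 using az ad sp create-for-rbac, which creates the service principal and assigns the role in one step. The name and scope below are hypothetical placeholders, and DRY_RUN=1 (the default) only prints the command:

```shell
# Hypothetical placeholders; substitute your subscription id and resource group.
scope="/subscriptions/<subscription id>/resourceGroups/<resource group>"

# With DRY_RUN=1 (default) the az command is only printed, not executed.
DRY_RUN="${DRY_RUN:-1}"
run() { if [ "$DRY_RUN" = "1" ]; then echo "az $*"; else az "$@"; fi; }

# Create the service principal and assign it the Owner role scoped to the
# resource group that contains the cluster virtual machines, instead of
# assigning the role on each VM individually as in the portal steps.
run ad sp create-for-rbac --name "hana-fence-sp" --role Owner --scopes "$scope"
```

The command output (when actually run) contains the appId and password, which map to the login id and password used in the STONITH configuration below.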
After you edited the permissions for the virtual machines, you can configure the STONITH devices in the cluster.
sudo vi crm-fencing.txt
# enter the following to crm-fencing.txt
# replace the bold string with your subscription id, resource group, tenant id, service principal
id and password

primitive rsc_st_azure_1 stonith:fence_azure_arm \
    params subscriptionId="subscription id" resourceGroup="resource group" tenantId="tenant id" login="login id" passwd="password"

primitive rsc_st_azure_2 stonith:fence_azure_arm \
    params subscriptionId="subscription id" resourceGroup="resource group" tenantId="tenant id" login="login id" passwd="password"

colocation col_st_azure -2000: rsc_st_azure_1:Started rsc_st_azure_2:Started

# now we load the file to the cluster


sudo crm configure load update crm-fencing.txt

Create SAP HANA resources

sudo vi crm-saphanatop.txt
# enter the following to crm-saphanatop.txt
# replace the bold string with your instance number and HANA system id

primitive rsc_SAPHanaTopology_HDB_HDB03 ocf:suse:SAPHanaTopology \
    operations $id="rsc_sap2_HDB_HDB03-operations" \
    op monitor interval="10" timeout="600" \
    op start interval="0" timeout="600" \
    op stop interval="0" timeout="300" \
    params SID="HDB" InstanceNumber="03"

clone cln_SAPHanaTopology_HDB_HDB03 rsc_SAPHanaTopology_HDB_HDB03 \
    meta is-managed="true" clone-node-max="1" target-role="Started" interleave="true"

# now we load the file to the cluster


sudo crm configure load update crm-saphanatop.txt
sudo vi crm-saphana.txt
# enter the following to crm-saphana.txt
# replace the bold string with your instance number, HANA system id and the frontend IP address of
the Azure load balancer.

primitive rsc_SAPHana_HDB_HDB03 ocf:suse:SAPHana \
    operations $id="rsc_sap_HDB_HDB03-operations" \
    op start interval="0" timeout="3600" \
    op stop interval="0" timeout="3600" \
    op promote interval="0" timeout="3600" \
    op monitor interval="60" role="Master" timeout="700" \
    op monitor interval="61" role="Slave" timeout="700" \
    params SID="HDB" InstanceNumber="03" PREFER_SITE_TAKEOVER="true" \
    DUPLICATE_PRIMARY_TIMEOUT="7200" AUTOMATED_REGISTER="false"

ms msl_SAPHana_HDB_HDB03 rsc_SAPHana_HDB_HDB03 \
    meta is-managed="true" notify="true" clone-max="2" clone-node-max="1" \
    target-role="Started" interleave="true"

primitive rsc_ip_HDB_HDB03 ocf:heartbeat:IPaddr2 \
    meta target-role="Started" is-managed="true" \
    operations $id="rsc_ip_HDB_HDB03-operations" \
    op monitor interval="10s" timeout="20s" \
    params ip="10.0.0.21"

primitive rsc_nc_HDB_HDB03 anything \
    params binfile="/usr/bin/nc" cmdline_options="-l -k 62503" \
    op monitor timeout=20s interval=10 depth=0

group g_ip_HDB_HDB03 rsc_ip_HDB_HDB03 rsc_nc_HDB_HDB03

colocation col_saphana_ip_HDB_HDB03 2000: g_ip_HDB_HDB03:Started \
    msl_SAPHana_HDB_HDB03

order ord_SAPHana_HDB_HDB03 2000: cln_SAPHanaTopology_HDB_HDB03 \
    msl_SAPHana_HDB_HDB03

# now we load the file to the cluster


sudo crm configure load update crm-saphana.txt
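The rsc_nc resource in the configuration above keeps a netcat listener on port 62503, the port the Azure load balancer health probe targets, so the probe only succeeds on the node where the IP group is running. A minimal local sketch of that behavior, assuming an nc binary that supports the -l and -k options (as the cluster resource does):

```shell
# Start a listener like the one the "anything" resource runs; 62503 matches
# the health probe configured on the load balancer.
PROBE_PORT=62503
probe_result="not verified"
if command -v nc >/dev/null 2>&1; then
  nc -l -k "$PROBE_PORT" >/dev/null 2>&1 &
  listener=$!
  sleep 1
  # Emulate the load balancer's TCP health probe with a plain connect check.
  if nc -z -w 1 127.0.0.1 "$PROBE_PORT" >/dev/null 2>&1; then
    probe_result="ok"
  fi
  kill "$listener" 2>/dev/null
fi
echo "health probe check: $probe_result"
```

Because the listener is grouped with the virtual IP, a failover moves both together and the load balancer starts routing traffic to the new master node.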

Test cluster setup


The following sections describe how you can test your setup. Every test assumes that you are root and that the SAP
HANA master is running on the virtual machine saphanavm1.
Fencing Test
You can test the setup of the fencing agent by disabling the network interface on node saphanavm1.

sudo ifdown eth0

The virtual machine should now get restarted or stopped depending on your cluster configuration. If you set the
stonith-action to off, the virtual machine will be stopped and the resources are migrated to the running virtual
machine.
Once you start the virtual machine again, the SAP HANA resource will fail to start as secondary if you set
AUTOMATED_REGISTER="false". In this case, you need to configure the HANA instance as secondary by executing
the following command:
su - hdbadm

# Stop the HANA instance just in case it is running


sapcontrol -nr 03 -function StopWait 600 10
hdbnsutil -sr_register --remoteHost=saphanavm2 --remoteInstance=03 --replicationMode=sync --name=SITE1

# switch back to root and cleanup the failed state


exit
crm resource cleanup msl_SAPHana_HDB_HDB03 saphanavm1

Testing a manual failover


You can test a manual failover by stopping the pacemaker service on node saphanavm1.

service pacemaker stop

After the failover, you can start the service again. The SAP HANA resource on saphanavm1 will fail to start as
secondary if you set AUTOMATED_REGISTER="false". In this case, you need to configure the HANA instance as
secondary by executing the following command:

service pacemaker start


su - hdbadm

# Stop the HANA instance just in case it is running


sapcontrol -nr 03 -function StopWait 600 10
hdbnsutil -sr_register --remoteHost=saphanavm2 --remoteInstance=03 --replicationMode=sync --name=SITE1

# switch back to root and cleanup the failed state


exit
crm resource cleanup msl_SAPHana_HDB_HDB03 saphanavm1

Testing a migration
You can migrate the SAP HANA master node by executing the following command

crm resource migrate msl_SAPHana_HDB_HDB03 saphanavm2


crm resource migrate g_ip_HDB_HDB03 saphanavm2

This should migrate the SAP HANA master node and the group that contains the virtual IP address to saphanavm2.
The SAP HANA resource on saphanavm1 will fail to start as secondary if you set AUTOMATED_REGISTER="false".
In this case, you need to configure the HANA instance as secondary by executing the following command:

su - hdbadm

# Stop the HANA instance just in case it is running


sapcontrol -nr 03 -function StopWait 600 10
hdbnsutil -sr_register --remoteHost=saphanavm2 --remoteInstance=03 --replicationMode=sync --name=SITE1

The migration creates location constraints that need to be deleted again.

crm configure edit

# delete location constraints that are named like the following constraint. You should have two constraints, one
for the SAP HANA resource and one for the IP address group.
location cli-prefer-g_ip_HDB_HDB03 g_ip_HDB_HDB03 role=Started inf: saphanavm2

You also need to clean up the state of the secondary node resource

# switch back to root and cleanup the failed state


exit
crm resource cleanup msl_SAPHana_HDB_HDB03 saphanavm1

Next steps
Azure Virtual Machines planning and implementation for SAP
Azure Virtual Machines deployment for SAP
Azure Virtual Machines DBMS deployment for SAP
To learn how to establish high availability and plan for disaster recovery of SAP HANA on Azure (large
instances), see SAP HANA (large instances) high availability and disaster recovery on Azure.
Backup guide for SAP HANA on Azure Virtual
Machines
8/21/2017 13 min to read Edit Online

Getting Started
The backup guide for SAP HANA running on Azure Virtual Machines describes only Azure-specific topics. For
general SAP HANA backup related items, check the SAP HANA documentation (see SAP HANA backup
documentation later in this article).
The focus of this article is on two major backup possibilities for SAP HANA on Azure virtual machines:
HANA backup to the file system in an Azure Linux Virtual Machine (see SAP HANA Azure Backup on file level)
HANA backup based on storage snapshots using the Azure storage blob snapshot feature manually or Azure
Backup Service (see SAP HANA backup based on storage snapshots)
SAP HANA offers a backup API, which allows third-party backup tools to integrate directly with SAP HANA. (That is
not within the scope of this guide.) There is no direct integration of SAP HANA with Azure Backup service available
right now based on this API.
SAP HANA is officially supported on Azure VM type GS5 as single instance with an additional restriction to OLAP
workloads (see Find Certified IaaS Platforms on the SAP website). This article will be updated as new offerings for
SAP HANA on Azure become available.
There is also an SAP HANA hybrid solution available on Azure, where SAP HANA runs non-virtualized on physical
servers. However, this SAP HANA Azure backup guide covers a pure Azure environment where SAP HANA runs in
an Azure VM, not SAP HANA running on "large instances." See SAP HANA (large instances) overview and
architecture on Azure for more information about this backup solution on "large instances" based on storage
snapshots.
General information about SAP products supported on Azure can be found in SAP Note 1928533.
The following three figures give an overview of the SAP HANA backup options using native Azure capabilities
currently, and also show three potential future backup scenarios. The related articles SAP HANA Azure Backup on
file level and SAP HANA backup based on storage snapshots describe these options in more detail, including size
and performance considerations for SAP HANA backups that are multi-terabytes in size.
This figure shows the possibility of saving the current VM state, either via Azure Backup service or manual
snapshot of VM disks. With this approach, one doesn't have to manage SAP HANA backups. The challenge of the
disk snapshot scenario is file system consistency, and an application-consistent disk state. The consistency topic is
discussed in the section SAP HANA data consistency when taking storage snapshots later in this article.
Capabilities and restrictions of Azure Backup service related to SAP HANA backups are also discussed later in this
article.

This figure shows options for taking an SAP HANA file backup inside the VM, and then storing the HANA backup
files somewhere else using different tools. Taking a HANA backup requires more time than a snapshot-based
backup solution, but it has advantages regarding integrity and consistency. More details are provided later in this
article.
This figure shows a potential future SAP HANA backup scenario. If SAP HANA allowed taking backups from a
replication secondary, it would add additional options for backup strategies. Currently it isn't possible according to
a post in the SAP HANA Wiki:
"Is it possible to take backups on the secondary side?
No, currently you can only take data and log backups on the primary side. If automatic log backup is enabled,
after takeover to the secondary side, the log backups will automatically be written there."

SAP resources for HANA backup


SAP HANA backup documentation
Introduction to SAP HANA Administration
Planning Your Backup and Recovery Strategy
Schedule HANA Backup using ABAP DBACOCKPIT
Schedule Data Backups (SAP HANA Cockpit)
FAQ about SAP HANA backup in SAP Note 1642148
FAQ about SAP HANA database and storage snapshots in SAP Note 2039883
Unsuitable network file systems for backup and recovery in SAP Note 1820529
Why SAP HANA backup?
Azure storage offers availability and reliability out of the box (see Introduction to Microsoft Azure Storage for
more information about Azure storage).
The minimum for "backup" is to rely on the Azure SLAs, keeping the SAP HANA data and log files on Azure VHDs
attached to the SAP HANA server VM. This approach covers VM failures, but not potential damage to the SAP
HANA data and log files, or logical errors like deleting data or files by accident. Backups are also required for
compliance or legal reasons. In short, there is always a need for SAP HANA backups.
How to verify correctness of SAP HANA backup
When using storage snapshots, running a test restore on a different system is recommended. This approach
provides a way to ensure that a backup is correct, and internal processes for backup and restore work as expected.
While this is a significant hurdle on-premises, it is much easier to accomplish in the cloud by providing necessary
resources temporarily for this purpose.
Keep in mind that doing a simple restore and checking if HANA is up and running is not sufficient. Ideally, one
should run a table consistency check to be sure that the restored database is fine. SAP HANA offers several kinds
of consistency checks described in SAP Note 1977584.
Information about the table consistency check can also be found on the SAP website at Table and Catalog
Consistency Checks.
For standard file backups, a test restore is not necessary. There are two SAP HANA tools that help to check which
backup can be used for restore: hdbbackupdiag and hdbbackupcheck. See Manually Checking Whether a Recovery
is Possible for more information about these tools.
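As an illustration, both checks can be scripted from the Linux shell. The following is a minimal sketch, not a tested procedure: the table consistency call follows SAP Note 1977584, the hdbuserstore key "BACKUPKEY" and the backup file path are placeholders, and the exact hdbbackupcheck options should be verified against the SAP documentation for your HANA revision.

```shell
#!/usr/bin/env bash
# Hedged sketch: verify a restored database and check a single backup file.
# hdbsql and hdbbackupcheck must be on the PATH of the <sid>adm user;
# the hdbuserstore key "BACKUPKEY" is a placeholder.
set -euo pipefail

check_restored_db() {
  # Table consistency check as described in SAP Note 1977584
  hdbsql -U BACKUPKEY "CALL CHECK_TABLE_CONSISTENCY('CHECK', NULL, NULL)"
}

check_backup_file() {
  # hdbbackupcheck validates the metadata/content of one backup file
  hdbbackupcheck "$1"
}
```

A test restore into a temporary Azure VM remains the stronger verification; these tools only confirm that a given backup is usable for recovery.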
Pros and cons of HANA backup versus storage snapshot
SAP doesn't give preference to either HANA backups or storage snapshots. It lists their pros and cons, so one
can determine which to use depending on the situation and available storage technology (see Planning Your
Backup and Recovery Strategy).
On Azure, be aware of the fact that the Azure blob snapshot feature doesn't guarantee file system consistency (see
Using blob snapshots with PowerShell). The next section, SAP HANA data consistency when taking storage
snapshots, discusses some considerations regarding this feature.
In addition, one has to understand the billing implications of working frequently with blob snapshots, as
described in the article Understanding How Snapshots Accrue Charges; they aren't as obvious as with Azure virtual
disks.
SAP HANA data consistency when taking storage snapshots
File system and application consistency is a complex issue when taking storage snapshots. The easiest way to
avoid problems would be to shut down SAP HANA, or maybe even the whole virtual machine. A shutdown might
be doable with a demo or prototype, or even a development system, but it is not an option for a production
system.
On Azure, one has to keep in mind that the Azure blob snapshot feature doesn't guarantee file system consistency.
It works fine however by using the SAP HANA snapshot feature, as long as there is only a single virtual disk
involved. But even with a single disk, additional items have to be checked. SAP Note 2039883 has important
information about SAP HANA backups via storage snapshots. For example, it mentions that, with the XFS file
system, it is necessary to run xfs_freeze before starting a storage snapshot to guarantee consistency (see
xfs_freeze(8) - Linux man page for details on xfs_freeze).
The topic of consistency becomes even more challenging in a case where a single file system spans multiple
disks/volumes. For example, by using mdadm or LVM and striping. The SAP Note mentioned above states:
"But keep in mind that the storage system has to guarantee I/O consistency while creating a storage snapshot per
SAP HANA data volume, i.e. snapshotting of an SAP HANA service-specific data volume must be an atomic
operation."
Assuming there is an XFS file system spanning four Azure virtual disks, the following steps provide a consistent
snapshot that represents the HANA data area:
HANA snapshot prepare
Freeze the file system (for example, use xfs_freeze)
Create all necessary blob snapshots on Azure
Unfreeze the file system
Confirm the HANA snapshot
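The steps above can be sketched as a shell sequence. This is a minimal sketch, assuming XFS mounted at the given mount point and the Azure CLI for the blob snapshots; the hdbuserstore key "BACKUPKEY", the "vhds" container, and the backup-catalog query are placeholders and assumptions to verify for your HANA revision and CLI version.

```shell
#!/usr/bin/env bash
# Minimal sketch of the five steps; all names and paths are placeholders.
set -euo pipefail

consistent_snapshot() {
  local mountpoint=$1; shift                       # remaining args: VHD blob names
  hdbsql -U BACKUPKEY "BACKUP DATA CREATE SNAPSHOT"            # 1. HANA snapshot prepare
  local backup_id
  # Assumption: the prepared snapshot is the newest 'data snapshot' catalog entry
  backup_id=$(hdbsql -U BACKUPKEY \
    "SELECT MAX(BACKUP_ID) FROM M_BACKUP_CATALOG WHERE ENTRY_TYPE_NAME = 'data snapshot'")
  xfs_freeze -f "$mountpoint"                                  # 2. freeze the file system
  for blob in "$@"; do                                         # 3. snapshot every data disk
    az storage blob snapshot --container-name vhds --name "$blob"
  done
  xfs_freeze -u "$mountpoint"                                  # 4. unfreeze
  hdbsql -U BACKUPKEY \
    "BACKUP DATA CLOSE SNAPSHOT BACKUP_ID $backup_id SUCCESSFUL 'blob-snapshot'"  # 5. confirm
}
```

The key design point is that the file system stays frozen only for the short window in which the blob snapshots are created, while the HANA snapshot-prepare mode brackets the whole sequence.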
To be on the safe side, use the procedure above in all cases, no matter which file system is used, and no matter
whether the data sits on a single disk or on a stripe set (via mdadm or LVM) across multiple disks.
It is important to confirm the HANA snapshot. Due to the "Copy-on-Write," SAP HANA might require
additional disk space while in this snapshot-prepare mode. It's also not possible to start new backups until the SAP
HANA snapshot is confirmed.
Azure Backup service uses Azure VM extensions to take care of the file system consistency. These VM extensions
are not available for standalone usage. One still has to manage SAP HANA consistency. See the related article SAP
HANA Azure Backup on file level for more information.
SAP HANA backup scheduling strategy
The SAP HANA article Planning Your Backup and Recovery Strategy states a basic plan to do backups:
Storage snapshot (daily)
Complete data backup using file or backint format (once a week)
Automatic log backups
Optionally, one could go completely without storage snapshots; they could be replaced by HANA delta backups,
like incremental or differential backups (see Delta Backups).
The HANA Administration guide provides an example list. It suggests that one recover SAP HANA to a specific
point in time using the following sequence of backups:
1. Full data backup
2. Differential backup
3. Incremental backup 1
4. Incremental backup 2
5. Log backups
Regarding an exact schedule as to when and how often a specific backup type should happen, it is not possible to
give a general guideline; it is too customer-specific, and depends on how many data changes occur in the system.
One basic recommendation from SAP side, which can be seen as general guidance, is to make one full HANA
backup once a week. Regarding log backups, see the SAP HANA documentation Log Backups.
SAP also recommends doing some housekeeping of the backup catalog to keep it from growing infinitely (see
Housekeeping for Backup Catalog and Backup Storage).
SAP HANA configuration files
As stated in the FAQ in SAP Note 1642148, the SAP HANA configuration files are not part of a standard HANA
backup. They are not essential to restore a system. The HANA configuration could be changed manually after the
restore. In case one would like to get the same custom configuration during the restore process, it is necessary to
back up the HANA configuration files separately.
If standard HANA backups are going to a dedicated HANA backup file system, one could also copy the
configuration files to the same backup filesystem, and then copy everything together to the final storage
destination like cool blob storage.
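One way to keep the configuration files alongside the backups is a small archive step after each backup run. This is a sketch under stated assumptions: the custom config directory is typically /hana/shared/&lt;SID&gt;/global/hdb/custom/config, but both paths are passed in here rather than hard-coded.

```shell
#!/usr/bin/env bash
# Sketch: archive the HANA config files next to the standard file backups,
# so they travel with the backups to the final storage destination.
set -euo pipefail

archive_hana_config() {
  local cfg_dir=$1 backup_dir=$2
  # Date-stamped tarball of the whole config directory
  tar -czf "${backup_dir}/hana_config_$(date +%Y%m%d).tar.gz" -C "$cfg_dir" .
}
```

The resulting tarball can then be copied to cool blob storage together with the backup files, for example with the blobxfer tool discussed later.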
SAP HANA Cockpit
SAP HANA Cockpit offers the possibility of monitoring and managing SAP HANA via a browser. It also allows
handling of SAP HANA backups, and therefore can be used as an alternative to SAP HANA Studio and ABAP
DBACOCKPIT (see SAP HANA Cockpit for more information).
This figure shows the SAP HANA Cockpit Database Administration Screen, and the backup tile on the left. Seeing
the backup tile requires appropriate user permissions for the login account.

Backups can be monitored in SAP HANA Cockpit while they are ongoing and, once a backup is finished, all its
details are available.

The previous screenshots were made from an Azure Windows VM. This one is an example using Firefox on an
Azure SLES 12 VM with Gnome desktop. It shows the option to define SAP HANA backup schedules in SAP HANA
Cockpit. As one can also see, it suggests date/time as a prefix for the backup files. In SAP HANA Studio, the default
prefix is "COMPLETE_DATA_BACKUP" when doing a full file backup. Using a unique prefix is recommended.
SAP HANA backup encryption
SAP HANA offers encryption of data and log. If SAP HANA data and log are not encrypted, then the backups are
also not encrypted. It is up to the customer to use some form of third-party solution to encrypt the SAP HANA
backups. See Data and Log Volume Encryption to find out more about SAP HANA encryption.
On Microsoft Azure, a customer could use the IaaS VM encryption feature to encrypt. For example, one could use
dedicated data disks attached to the VM, which are used to store SAP HANA backups, then make copies of these
disks.
Azure Backup service can handle encrypted VMs/disks (see How to back up and restore encrypted virtual
machines with Azure Backup).
Another option would be to maintain the SAP HANA VM and its disks without encryption, and store the SAP
HANA backup files in a storage account for which encryption was enabled (see Azure Storage Service Encryption
for Data at Rest).

Test setup
Test Virtual Machine on Azure
An SAP HANA installation in an Azure GS5 VM was used for the following backup/restore tests.

This figure shows part of the Azure portal overview for the HANA test VM.
Test backup size

A dummy table was filled up with data to get a total data backup size of over 200 GB to derive realistic
performance data. The figure was taken from the backup console in HANA Studio and shows the backup file size
of 229 GB for the HANA index server. For the tests, the default backup prefix "COMPLETE_DATA_BACKUP" in SAP
HANA Studio was used. In real production systems, a more useful prefix should be defined. SAP HANA Cockpit
suggests date/time.
Test tool to copy files directly to Azure storage
To transfer SAP HANA backup files directly to Azure blob storage, or Azure file shares, the blobxfer tool was used
because it supports both targets and it can be easily integrated into automation scripts due to its command-line
interface. The blobxfer tool is available on GitHub.
Test backup size estimation
It is important to estimate the backup size of SAP HANA. This estimate helps to define the max backup file size,
which determines the number of backup files and thus the degree of parallelism during a file copy. (Those details
are explained later in this article.) One must also decide whether to do a full backup or a delta backup (incremental
or differential).
Fortunately, there is a simple SQL statement that estimates the size of the backup files: select * from
M_BACKUP_SIZE_ESTIMATIONS (see Estimate the Space Needed in the File System for a Data Backup).

For the test system, the output of this SQL statement matches almost exactly the real size of the full data backup
on disk.
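For illustration, the estimation query can also be scripted with hdbsql. This is a sketch, assuming the ESTIMATED_SIZE column documented for the M_BACKUP_SIZE_ESTIMATIONS view; the hdbuserstore key "BACKUPKEY" is a placeholder.

```shell
#!/usr/bin/env bash
# Sketch: pull the total backup size estimate (in bytes) and convert to GiB.
set -euo pipefail

estimate_backup_bytes() {
  # Sum the per-service estimates; column name per the SAP HANA documentation
  hdbsql -U BACKUPKEY "SELECT SUM(ESTIMATED_SIZE) FROM M_BACKUP_SIZE_ESTIMATIONS"
}

bytes_to_gib() {
  echo $(( $1 / 1024 / 1024 / 1024 ))
}
```

The GiB value can then be divided by the intended max backup file size to estimate how many files a full data backup will produce.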
Test HANA backup file size

The HANA Studio backup console allows one to restrict the max file size of HANA backup files. In the sample
environment, that feature makes it possible to get multiple smaller backup files instead of one 230-GB backup file.
Smaller file size has a significant impact on performance (see the related article SAP HANA Azure Backup on file
level).

Summary
Based on the test results, the following tables show the pros and cons of solutions to back up an SAP HANA
database running on Azure virtual machines.
Back up SAP HANA to the file system and copy backup files afterwards to the final backup destination

| SOLUTION | PROS | CONS |
| --- | --- | --- |
| Keep HANA backups on VM disks | No additional management efforts | Eats up local VM disk space |
| Blobxfer tool to copy backup files to blob storage | Parallelism to copy multiple files, choice to use cool blob storage | Additional tool maintenance and custom scripting |
| Blob copy via PowerShell or CLI | No additional tool necessary, can be accomplished via Azure PowerShell or CLI | Manual process, customer has to take care of scripting and management of copied blobs for restore |
| Copy to NFS share | Post-processing of backup files on other VM without impact on the HANA server | Slow copy process |
| Blobxfer copy to Azure File Service | Doesn't eat up space on local VM disks | No direct write support by HANA backup, size restriction of file share currently at 5 TB |
| Azure Backup Agent | Would be preferred solution | Currently not available on Linux |

Backup SAP HANA based on storage snapshots

| SOLUTION | PROS | CONS |
| --- | --- | --- |
| Azure Backup Service | Allows VM backup based on blob snapshots | When not using file level restore, it requires the creation of a new VM for the restore process, which then implies the need of a new SAP HANA license key |
| Manual blob snapshots | Flexibility to create and restore specific VM disks without changing the unique VM ID | All manual work, which has to be done by the customer |

Next steps
SAP HANA Azure Backup on file level describes the file-based backup option.
SAP HANA backup based on storage snapshots describes the storage snapshot-based backup option.
To learn how to establish high availability and plan for disaster recovery of SAP HANA on Azure (large
instances), see SAP HANA (large instances) high availability and disaster recovery on Azure.
SAP HANA Azure Backup on file level
8/21/2017 10 min to read

Introduction
This is part of a three-part series of related articles on SAP HANA backup. Backup guide for SAP HANA on Azure
Virtual Machines provides an overview and information on getting started, and SAP HANA backup based on
storage snapshots covers the storage snapshot-based backup option.
Looking at the Azure VM sizes, one can see that a GS5 allows 64 attached data disks. For large SAP HANA
systems, a significant number of disks might already be taken for data and log files, possibly in combination with
software RAID for optimal disk IO throughput. The question then is where to store SAP HANA backup files, which
could fill up the attached data disks over time? See Sizes for Linux virtual machines in Azure for the Azure VM size
tables.
There is no SAP HANA backup integration available with Azure Backup service at this time. The standard way to
manage backup/restore at the file level is with a file-based backup via SAP HANA Studio or via SAP HANA SQL
statements. See SAP HANA SQL and System Views Reference for more information.

This figure shows the dialog of the backup menu item in SAP HANA Studio. When choosing type "file," one has to
specify a path in the file system where SAP HANA writes the backup files. Restore works the same way.
While this choice sounds simple and straight forward, there are some considerations. As mentioned before, an
Azure VM has a limitation of number of data disks that can be attached. There might not be capacity to store SAP
HANA backup files on the file systems of the VM, depending on the size of the database and disk throughput
requirements, which might involve software RAID using striping across multiple data disks. Various options for
moving these backup files, and managing file size restrictions and performance when handling terabytes of data,
are provided later in this article.
Another option, which offers more freedom regarding total capacity, is Azure blob storage. While a single blob is
also restricted to 1 TB, the total capacity of a single blob container is currently 500 TB. Additionally, it gives
customers the choice to select so-called "cool" blob storage, which has a cost benefit. See Azure Blob Storage: Hot
and cool storage tiers for details about cool blob storage.
For additional safety, use a geo-replicated storage account to store the SAP HANA backups. See Azure Storage
replication for details about storage account replication.
One could place dedicated VHDs for SAP HANA backups in a dedicated backup storage account that is geo-
replicated. Or else one could copy the VHDs that keep the SAP HANA backups to a geo-replicated storage
account, or to a storage account that is in a different region.

Azure backup agent


Azure Backup offers the option to not only back up complete VMs, but also files and directories via the backup
agent, which has to be installed on the guest OS. But as of December 2016, this agent is only supported on
Windows (see Back up a Windows Server or client to Azure using the Resource Manager deployment model).
A workaround is to first copy SAP HANA backup files to a Windows VM on Azure (for example, via SAMBA share)
and then use the Azure backup agent from there. While it is technically possible, it would add complexity and slow
down the backup or restore process quite a bit due to the copy between the Linux and the Windows VM. It is not
recommended to follow this approach.

Azure blobxfer utility details


To store directories and files on Azure storage, one could use CLI or PowerShell, or develop a tool using one of
the Azure SDKs. There is also a ready-to-use utility, AzCopy, for copying data to Azure storage, but it is Windows
only (see Transfer data with the AzCopy Command-Line Utility).
Therefore blobxfer was used for copying SAP HANA backup files. It is open source, used by many customers in
production environments, and available on GitHub. This tool allows one to copy data directly to either Azure blob
storage or Azure file share. It also offers a range of useful features, like md5 hash or automatic parallelism when
copying a directory with multiple files.
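A typical invocation could look like the following sketch. It assumes the blobxfer 1.x command-line syntax; the storage account name, the "sap-hana-backups" container, and the key variable are placeholders.

```shell
#!/usr/bin/env bash
# Sketch: upload a directory of HANA backup files to blob storage with blobxfer.
set -euo pipefail

upload_backups() {
  local local_dir=$1
  blobxfer upload \
    --storage-account myhanabackups \
    --storage-account-key "$STORAGE_KEY" \
    --remote-path sap-hana-backups \
    --local-path "$local_dir"
}
```

When the directory contains multiple backup files, blobxfer transfers them in parallel, which is exactly what makes the smaller max-backup-file-size setting discussed later pay off.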

SAP HANA backup performance

This screenshot is of the SAP HANA backup console in SAP HANA Studio. It took about 42 minutes to do the
backup of the 230 GB on a single Azure standard storage disk attached to the HANA VM using XFS file system.
This screenshot is of YaST on the SAP HANA test VM. One can see the 1-TB single disk for SAP HANA backup
mentioned before, which took about 42 minutes for the 230-GB backup. In addition, five 200-GB disks were
attached and software RAID md0 created, with striping on top of these five Azure data disks.

Repeating the same backup on software RAID with striping across five attached Azure standard storage data disks
brought the backup time from 42 minutes down to 10 minutes. The disks were attached without caching to the
VM. So it is obvious how important disk write throughput is for the backup time. One could then switch to Azure
premium storage to further accelerate the process for optimal performance. In general, Azure premium storage
should be used for production systems.
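The stripe set used in these tests can be built along the following lines. This is a sketch: the device names are examples to verify with lsblk first, and the commands require root.

```shell
#!/usr/bin/env bash
# Sketch: build a RAID-0 stripe set across attached Azure data disks
# to serve as a fast target for SAP HANA file backups.
set -euo pipefail

make_backup_stripe() {
  local mountpoint=$1; shift                     # remaining args: the raw devices
  mdadm --create /dev/md0 --level=0 --raid-devices=$# "$@"
  mkfs.xfs /dev/md0
  mkdir -p "$mountpoint"
  mount /dev/md0 "$mountpoint"
}
```

With five devices, for example make_backup_stripe /hana/backup /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg, the write throughput of all five disks is aggregated, which is what cut the backup time from 42 minutes to 10 in the test.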

Copy SAP HANA backup files to Azure blob storage


As of December 2016, the best option to quickly store SAP HANA backup files is Azure blob storage. One single
blob container has a limit of 500 TB, enough for most SAP HANA systems, running in a GS5 VM on Azure, to keep
sufficient SAP HANA backups. Customers have the choice between "hot" and "cool" blob storage (see Azure Blob
Storage: Hot and cool storage tiers).
With the blobxfer tool, it is easy to copy the SAP HANA backup files directly to Azure blob storage.
Here one can see the files of a full SAP HANA file backup. There are four files and the biggest one has roughly 230
GB.

Without using the md5 hash option in the initial test, it took roughly 3000 seconds to copy the 230 GB to an Azure
standard storage account blob container.

In this screenshot, one can see how it looks on the Azure portal. A blob container named "sap-hana-backups" was
created and includes the four blobs, which represent the SAP HANA backup files. One of them has a size of
roughly 230 GB.
The HANA Studio backup console allows one to restrict the max file size of HANA backup files. In the sample
environment, it improved performance by making it possible to have multiple smaller backup files, instead of one
large 230-GB file.

Setting the backup file size limit on the HANA side doesn't improve the backup time, because the files are written
sequentially as shown in this figure. The file size limit was set to 60 GB, so the backup created four large data files
instead of the 230-GB single file.
To test parallelism of the blobxfer tool, the max file size for HANA backups was then set to 15 GB, which resulted
in 19 backup files. This configuration brought the time for blobxfer to copy the 230 GB to Azure blob storage from
3000 seconds down to 875 seconds.
This result is due to the limit of 60 MB/sec for writing an Azure blob. Parallelism via multiple blobs solves the
bottleneck, but there is a downside: increasing the performance of the blobxfer tool to copy all these HANA backup
files to Azure blob storage puts load on both the HANA VM and the network, and operation of the HANA system
becomes impacted.

Blob copy of dedicated Azure data disks in backup software RAID


Unlike the manual VM data disk backup, in this approach one does not back up all the data disks on a VM to save
the whole SAP installation, including HANA data, HANA log files, and config files. Instead, the idea is to have
dedicated software RAID with striping across multiple Azure data VHDs for storing a full SAP HANA file backup.
One copies only these disks, which have the SAP HANA backup. They could easily be kept in a dedicated HANA
backup storage account, or attached to a dedicated "backup management VM" for further processing.

After the backup to the local software RAID was completed, all VHDs involved were copied using the
Start-AzureStorageBlobCopy PowerShell command (see Start-AzureStorageBlobCopy). As it only affects the dedicated
file system for keeping the backup files, there are no concerns about SAP HANA data or log file consistency on the
disk. A benefit of this command is that it works while the VM stays online. To be certain that no process writes to
the backup stripe set, be sure to unmount it before the blob copy, and mount it again afterwards. Or one could
use an appropriate way to "freeze" the file system. For example, via xfs_freeze for the XFS file system.
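The quiesce-and-copy flow can be sketched as follows. Instead of the Start-AzureStorageBlobCopy PowerShell cmdlet, this sketch uses the equivalent Azure CLI command; account login, the container names, and the blob names are placeholders, and the remount assumes a matching /etc/fstab entry.

```shell
#!/usr/bin/env bash
# Sketch: quiesce the backup stripe set, copy its VHD blobs, remount.
set -euo pipefail

copy_backup_disks() {
  local mountpoint=$1; shift                 # remaining args: VHD blob names
  umount "$mountpoint"                       # ensure nothing writes to the stripe set
  for vhd in "$@"; do
    az storage blob copy start \
      --source-container vhds --source-blob "$vhd" \
      --destination-container backup-copies --destination-blob "$vhd"
  done
  mount "$mountpoint"                        # assumes an /etc/fstab entry exists
}
```

Because only the dedicated backup file system is unmounted, the SAP HANA data and log volumes are untouched and the VM stays online throughout.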
This screenshot shows the list of blobs in the "vhds" container on the Azure portal. The screenshot shows the five
VHDs, which were attached to the SAP HANA server VM to serve as the software RAID to keep SAP HANA backup
files. It also shows the five copies, which were taken via the blob copy command.

For testing purposes, the copies of the SAP HANA backup software RAID disks were attached to the app server
VM.
The app server VM was shut down to attach the disk copies. After starting the VM, the disks and the RAID were
discovered correctly (mounted via UUID). Only the mount point was missing, which was created via the YaST
partitioner. Afterwards the SAP HANA backup file copies became visible on OS level.

Copy SAP HANA backup files to NFS share


To lessen the potential impact on the SAP HANA system from a performance or disk space perspective, one might
consider storing the SAP HANA backup files on an NFS share. Technically it works, but it means using a second
Azure VM as the host of the NFS share. It should not be a small VM size, due to the VM network bandwidth. It
would make sense then to shut down this "backup VM" and only bring it up for executing the SAP HANA backup.
Writing on an NFS share puts load on the network and impacts the SAP HANA system, but merely managing the
backup files afterwards on the "backup VM" would not influence the SAP HANA system at all.

To verify the NFS use case, an NFS share from another Azure VM was mounted to the SAP HANA server VM.
There was no special NFS tuning applied.
The NFS share was a fast stripe set, like the one on the SAP HANA server. Nevertheless, it took 1 hour and 46
minutes to do the backup directly on the NFS share instead of 10 minutes, when writing to a local stripe set.

The alternative of doing a backup to a local stripe set and copying to the NFS share on OS level (a simple cp -avr
command) wasn't much quicker. It took 1 hour and 43 minutes.
So it works, but performance wasn't good for the 230-GB backup test. It would look even worse for multi-terabyte
databases.

Copy SAP HANA backup files to Azure file service


It is possible to mount an Azure file share inside an Azure Linux VM. The article How to use Azure File storage with
Linux provides details on how to do it. Keep in mind that there is currently a 5-TB quota limit of one Azure file
share, and a file size limit of 1 TB per file. See Azure Storage Scalability and Performance Targets for information
on storage limits.
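The mount step described in that article can be sketched like this; the storage account name, share name, and key are placeholders, and the cifs-utils package must be installed on the VM.

```shell
#!/usr/bin/env bash
# Sketch: mount an Azure file share via CIFS on a Linux VM, following
# "How to use Azure File storage with Linux".
set -euo pipefail

mount_azure_share() {
  local account=$1 share=$2 key=$3 mountpoint=$4
  mkdir -p "$mountpoint"
  mount -t cifs "//${account}.file.core.windows.net/${share}" "$mountpoint" \
    -o "vers=3.0,username=${account},password=${key},dir_mode=0777,file_mode=0777"
}
```

Once mounted, the backup files can be copied to the share with standard tools or with blobxfer, as in the test described below.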
Tests have shown, however, that SAP HANA backup doesn't currently work directly with this kind of CIFS mount.
It is also stated in SAP Note 1820529 that CIFS is not recommended.
This figure shows an error in the backup dialog in SAP HANA Studio, when trying to back up directly to a CIFS-
mounted Azure file share. So one has to do a standard SAP HANA backup into a VM file system first, and then
copy the backup files from there to Azure file service.

This figure shows that it took about 929 seconds to copy 19 SAP HANA backup files with a total size of roughly
230 GB to the Azure file share.

In this screenshot, one can see that the source directory structure on the SAP HANA VM was copied to the Azure
file share: one directory (hana_backup_fsl_15gb) and 19 individual backup files.
Storing SAP HANA backup files on Azure files could be an interesting option in the future when SAP HANA file
backups support it directly. Or when it becomes possible to mount Azure files via NFS and the maximum quota
limit is considerably higher than 5 TB.

Next steps
Backup guide for SAP HANA on Azure Virtual Machines gives an overview and information on getting started.
SAP HANA backup based on storage snapshots describes the storage snapshot-based backup option.
To learn how to establish high availability and plan for disaster recovery of SAP HANA on Azure (large
instances), see SAP HANA (large instances) high availability and disaster recovery on Azure.
SAP HANA backup based on storage snapshots
8/7/2017 8 min to read

Introduction
This is part of a three-part series of related articles on SAP HANA backup. Backup guide for SAP HANA on Azure
Virtual Machines provides an overview and information on getting started, and SAP HANA Azure Backup on file
level covers the file-based backup option.
For a single-instance all-in-one demo system, one might consider doing a VM backup instead of managing HANA
backups at the OS level. An alternative is to take Azure blob snapshots to
create copies of individual virtual disks, which are attached to a virtual machine, and keep the HANA data files. But
a critical point is app consistency when creating a VM backup or disk snapshot while the system is up and
running. See SAP HANA data consistency when taking storage snapshots in the related article Backup guide for
SAP HANA on Azure Virtual Machines. SAP HANA has a feature that supports these kinds of storage snapshots.

SAP HANA snapshots


There is a feature in SAP HANA that supports taking a storage snapshot. However, as of December 2016, there is a
restriction to single-container systems. Multitenant container configurations do not support this kind of database
snapshot (see Create a Storage Snapshot (SAP HANA Studio)).
It works as follows:
Prepare for a storage snapshot by initiating the SAP HANA snapshot
Run the storage snapshot (Azure blob snapshot, for example)
Confirm the SAP HANA snapshot
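The first and last of these steps map to two SQL statements, which can also be issued from the Linux shell via hdbsql. This is a minimal sketch: the hdbuserstore key "BACKUPKEY" and the external ID string are placeholders, and the BACKUP_ID of the prepared snapshot has to be looked up (for example, in the backup catalog) between the two calls.

```shell
#!/usr/bin/env bash
# Sketch: the HANA side of the storage-snapshot workflow via hdbsql.
set -euo pipefail

prepare_hana_snapshot() {
  hdbsql -U BACKUPKEY "BACKUP DATA CREATE SNAPSHOT COMMENT 'azure blob snapshot'"
}

confirm_hana_snapshot() {
  local backup_id=$1
  hdbsql -U BACKUPKEY "BACKUP DATA CLOSE SNAPSHOT BACKUP_ID $backup_id SUCCESSFUL 'external-id'"
}
```

The storage snapshot itself (for example, an Azure blob snapshot) runs between these two calls, while HANA keeps the snapshot-relevant data frozen.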

This screenshot shows that an SAP HANA data snapshot can be created via a SQL statement.
The snapshot then also appears in the backup catalog in SAP HANA Studio.

On disk, the snapshot shows up in the SAP HANA data directory.


One has to ensure that the file system consistency is also guaranteed before running the storage snapshot while
SAP HANA is in the snapshot preparation mode. See SAP HANA data consistency when taking storage snapshots
in the related article Backup guide for SAP HANA on Azure Virtual Machines.
Once the storage snapshot is done, it is critical to confirm the SAP HANA snapshot. There is a corresponding SQL
statement to run: BACKUP DATA CLOSE SNAPSHOT (see BACKUP DATA CLOSE SNAPSHOT Statement (Backup
and Recovery)).

IMPORTANT
Confirm the HANA snapshot. Due to "Copy-on-Write," SAP HANA might require additional disk space in snapshot-prepare
mode, and it is not possible to start new backups until the SAP HANA snapshot is confirmed.

HANA VM backup via Azure Backup service


As of December 2016, the backup agent of the Azure Backup service is not available for Linux VMs. To make use
of Azure backup on the file/directory level, one would copy SAP HANA backup files to a Windows VM and then
use the backup agent. Otherwise, only a full Linux VM backup is possible via the Azure Backup service. See
Overview of the features in Azure Backup to find out more.
The Azure Backup service offers an option to back up and restore a VM. More information about this service and
how it works can be found in the article Plan your VM backup infrastructure in Azure.
There are two important considerations according to that article:
"For Linux virtual machines, only file-consistent backups are possible, since Linux does not have an equivalent
platform to VSS."
"Applications need to implement their own "fix-up" mechanism on the restored data."
Therefore, one has to make sure SAP HANA is in a consistent state on disk when the backup starts. See SAP HANA
snapshots described earlier in the document. But there is a potential issue when SAP HANA stays in this snapshot
preparation mode. See Create a Storage Snapshot (SAP HANA Studio) for more information.
That article states:
"It is strongly recommended to confirm or abandon a storage snapshot as soon as possible after it has been
created. While the storage snapshot is being prepared or created, the snapshot-relevant data is frozen. While the
snapshot-relevant data remains frozen, changes can still be made in the database. Such changes will not cause
the frozen snapshot-relevant data to be changed. Instead, the changes are written to positions in the data area
that are separate from the storage snapshot. Changes are also written to the log. However, the longer the
snapshot-relevant data is kept frozen, the more the data volume can grow."
Azure Backup takes care of the file system consistency via Azure VM extensions. These extensions are not available
standalone, and work only in combination with Azure Backup service. Nevertheless, it is still a requirement to
manage an SAP HANA snapshot to guarantee app consistency.
Azure Backup has two major phases:
Take Snapshot
Transfer data to vault
So one could confirm the SAP HANA snapshot once the Azure Backup service phase of taking a snapshot is
completed. It might take several minutes to see in the Azure portal.

This figure shows part of the backup job list of an Azure Backup service, which was used to back up the HANA test
VM.

To show the job details, click the backup job in the Azure portal. Here, one can see the two phases. It might take a
few minutes until it shows the snapshot phase as completed. Most of the time is spent in the data transfer phase.

HANA VM backup automation via Azure Backup service


One could manually confirm the SAP HANA snapshot once the Azure Backup snapshot phase is completed, as
described earlier, but it is helpful to consider automation because an admin might not monitor the backup job list
in the Azure portal.
Here is an explanation how it could be accomplished via Azure PowerShell cmdlets.

An Azure Backup service was created with the name "hana-backup-vault." The PowerShell command
Get-AzureRmRecoveryServicesVault -Name hana-backup-vault retrieves the corresponding object. This object is
then used to set the backup context, as seen in the next figure.

After setting the correct context, one can check for the backup job currently in progress, and then look for its job
details. The subtask list shows if the snapshot phase of the Azure backup job is already completed:

$ars = Get-AzureRmRecoveryServicesVault -Name hana-backup-vault
Set-AzureRmRecoveryServicesVaultContext -Vault $ars
$jid = Get-AzureRmRecoveryServicesBackupJob -Status InProgress | select -ExpandProperty jobid
Get-AzureRmRecoveryServicesBackupJobDetails -Jobid $jid | select -ExpandProperty subtasks

Once the job details are stored in a variable, it is simply PS syntax to get to the first array entry and retrieve the
status value. To complete the automation script, poll the value in a loop until it turns to "Completed."

$st = Get-AzureRmRecoveryServicesBackupJobDetails -Jobid $jid | select -ExpandProperty subtasks
$st[0] | select -ExpandProperty status
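The same polling idea can be expressed with the Azure CLI instead of PowerShell. This is a sketch only: the az backup job show command exists in current CLI versions, but the JMESPath for the "Take Snapshot" subtask is an assumption and should be verified against the command's actual JSON output before relying on it.

```shell
#!/usr/bin/env bash
# Sketch: poll the Azure Backup job until its snapshot subtask completes,
# so the SAP HANA snapshot can be confirmed as early as possible.
set -euo pipefail

wait_for_snapshot() {
  local rg=$1 vault=$2 job=$3 status=""
  while [ "$status" != "Completed" ]; do
    # Assumption: the snapshot subtask appears in the job's tasksList
    status=$(az backup job show --resource-group "$rg" --vault-name "$vault" --name "$job" \
      --query "properties.extendedInfo.tasksList[?taskId=='Take Snapshot'] | [0].status" -o tsv)
    echo "snapshot subtask: ${status:-unknown}"
    [ "$status" = "Completed" ] || sleep 30
  done
}
```

Once the loop exits, the SAP HANA snapshot can be confirmed without waiting for the much longer data transfer phase.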

HANA license key and VM restore via Azure Backup service


The Azure Backup service is designed to create a new VM during restore. There is no plan right now to do an
"in-place" restore of an existing Azure VM.
This figure shows the restore option of the Azure service in the Azure portal. One can choose between creating a
VM during restore or restoring the disks. After restoring the disks, it is still necessary to create a new VM on top of
it. Whenever a new VM gets created on Azure the unique VM ID changes (see Accessing and Using Azure VM
Unique ID).

This figure shows the Azure VM unique ID before and after the restore via Azure Backup service. The SAP
hardware key, which is used for SAP licensing, is derived from this unique VM ID. As a consequence, a new SAP
license has to be installed after a VM restore.
A new Azure Backup feature was presented in preview mode during the creation of this backup guide. It allows a
file level restore based on the VM snapshot that was taken for the VM backup. This avoids the need to deploy a
new VM, and therefore the unique VM ID stays the same and no new SAP HANA license key is required. More
documentation on this feature will be provided after it is fully tested.
Azure Backup will eventually allow backup of individual Azure virtual disks, plus files and directories from inside
the VM. A major advantage of Azure Backup is its management of all the backups, saving the customer from
having to do it. If a restore becomes necessary, Azure Backup will select the correct backup to use.

SAP HANA VM backup via manual disk snapshot


Instead of using the Azure Backup service, one could configure an individual backup solution by creating blob
snapshots of Azure VHDs manually via PowerShell. See Using blob snapshots with PowerShell for a description of
the steps.
It provides more flexibility but does not resolve the issues explained earlier in this document:
One still must make sure that SAP HANA is in a consistent state.
The OS disk cannot be overwritten, even if the VM is deallocated, because an error states that a lease exists.
Overwriting works only after deleting the VM, which would lead to a new unique VM ID and the need to install a new SAP
license.
It is possible, however, to restore only the data disks of an Azure VM, avoiding the problem of getting a new unique VM ID
and, therefore, an invalidated SAP license:
1. For the test, two Azure data disks were attached to a VM, and software RAID was defined on top of them.
2. It was confirmed that SAP HANA was in a consistent state by using the SAP HANA snapshot feature.
3. The file system was frozen (see SAP HANA data consistency when taking storage snapshots in the related article Backup
guide for SAP HANA on Azure Virtual Machines).
4. Blob snapshots were taken from both data disks.
5. The file system was unfrozen.
6. The SAP HANA snapshot was confirmed.
7. To restore the data disks, the VM was shut down and both disks detached.
8. After detaching the disks, they were overwritten with the former blob snapshots.
9. The restored virtual disks were attached again to the VM.
10. After starting the VM, everything on the software RAID worked fine and was set back to the blob snapshot time.
11. HANA was set back to the HANA snapshot.
If it is possible to shut down SAP HANA before taking the blob snapshots, the procedure is less complex. In that
case, one can skip the HANA snapshot and, if nothing else is going on in the system, also skip the file system
freeze. Added complexity comes into the picture when snapshots must be taken while everything is online.
See SAP HANA data consistency when taking storage snapshots in the related article Backup guide for SAP HANA
on Azure Virtual Machines.
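The online-snapshot sequence above can be sketched as a dry-run script. Every step only echoes the action a real script would take: the hana-create-snapshot, blob-snapshot, and hana-confirm-snapshot names are hypothetical placeholders for the hdbsql snapshot statements and the Azure PowerShell blob-snapshot cmdlets, and /hana/data and the disk names are example values.

```shell
#!/bin/sh
# Dry-run sketch: the 'run' helper only echoes what a real script would execute.
run() { echo "WOULD RUN: $*"; }

run hana-create-snapshot            # hypothetical: prepare an SAP HANA storage snapshot via hdbsql
run fsfreeze --freeze /hana/data    # freeze the file system on top of the data disks
for disk in datadisk0 datadisk1; do
  run blob-snapshot "$disk"         # hypothetical: blob snapshot via Azure PowerShell
done
run fsfreeze --unfreeze /hana/data  # thaw the file system again
run hana-confirm-snapshot           # hypothetical: confirm the SAP HANA snapshot via hdbsql
```

Replacing each `run` line with the real hdbsql, fsfreeze, and PowerShell invocations turns the sketch into the sequence tested above.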

Next steps
Backup guide for SAP HANA on Azure Virtual Machines gives an overview and information on getting started.
SAP HANA backup based on file level covers the file-based backup option.
To learn how to establish high availability and plan for disaster recovery of SAP HANA on Azure (large
instances), see SAP HANA (large instances) high availability and disaster recovery on Azure.
Deploy SAP IDES EHP7 SP3 for SAP ERP 6.0 on
Azure
6/27/2017 4 min to read

This article describes how to deploy an SAP IDES system running with SQL Server and the Windows operating
system on Azure via the SAP Cloud Appliance Library (SAP CAL) 3.0. The screenshots show the step-by-step
process. To deploy a different solution, follow the same steps.
To start with the SAP CAL, go to the SAP Cloud Appliance Library website. SAP also has a blog about the new SAP
Cloud Appliance Library 3.0.

NOTE
As of May 29, 2017, you can use the Azure Resource Manager deployment model in addition to the less-preferred classic
deployment model to deploy the SAP CAL. We recommend that you use the new Resource Manager deployment model and
disregard the classic deployment model.

If you already created an SAP CAL account that uses the classic model, you need to create another SAP CAL
account. This account needs to exclusively deploy into Azure by using the Resource Manager model.
After you sign in to the SAP CAL, the first page usually leads you to the Solutions page. The solutions offered on
the SAP CAL are steadily increasing, so you might need to scroll quite a bit to find the solution you want. The
Windows-based SAP IDES solution, available exclusively on Azure, is used here to demonstrate the deployment
process.

Create an account in the SAP CAL


1. To sign in to the SAP CAL for the first time, use your SAP S-User or another user registered with SAP. Then
define an SAP CAL account that is used by the SAP CAL to deploy appliances on Azure. In the account
definition, you need to:
a. Select the deployment model on Azure (Resource Manager or classic).
b. Enter your Azure subscription. An SAP CAL account can be assigned to one subscription only. If you need
more than one subscription, you need to create another SAP CAL account.
c. Give the SAP CAL permission to deploy into your Azure subscription.

NOTE
The next steps show how to create an SAP CAL account for Resource Manager deployments. If you already have an
SAP CAL account that is linked to the classic deployment model, you need to follow these steps to create a new SAP
CAL account. The new SAP CAL account needs to deploy in the Resource Manager model.

2. To create a new SAP CAL account, the Accounts page shows two choices for Azure:
a. Microsoft Azure (classic) is the classic deployment model and is no longer preferred.
b. Microsoft Azure is the new Resource Manager deployment model.

To deploy in the Resource Manager model, select Microsoft Azure.

3. Enter the Azure Subscription ID that can be found on the Azure portal.
4. To authorize the SAP CAL to deploy into the Azure subscription you defined, click Authorize. The following
page appears in the browser tab:

5. If more than one user is listed, choose the Microsoft account that is linked to be the coadministrator of the
Azure subscription you selected. The following page appears in the browser tab:

6. Click Accept. If the authorization is successful, the SAP CAL account definition displays again. After a short
time, a message confirms that the authorization process was successful.
7. To assign the newly created SAP CAL account to your user, enter your User ID in the text box on the right
and click Add.
8. To associate your account with the user that you use to sign in to the SAP CAL, click Review.
9. To create the association between your user and the newly created SAP CAL account, click Create.

You successfully created an SAP CAL account that is able to:


Use the Resource Manager deployment model.
Deploy SAP systems into your Azure subscription.

NOTE
Before you can deploy the SAP IDES solution based on Windows and SQL Server, you might need to sign up for an SAP CAL
subscription. Otherwise, the solution might show up as Locked on the overview page.

Deploy a solution
1. After you set up an SAP CAL account, select the SAP IDES solution on Windows and SQL Server.
Click Create Instance, and confirm the usage and terms and conditions.
2. On the Basic Mode: Create Instance page, you need to:
a. Enter an instance Name.
b. Select an Azure Region. You might need an SAP CAL subscription to get multiple Azure regions offered.
c. Enter the master Password for the solution, as shown:
3. Click Create. After some time, depending on the size and complexity of the solution (the SAP CAL provides
an estimate), the status is shown as active and ready for use:

4. To find the resource group and all its objects that were created by the SAP CAL, go to the Azure portal. The
virtual machine can be found starting with the same instance name that was given in the SAP CAL.
5. On the SAP CAL portal, go to the deployed instances and click Connect. The following pop-up window
appears:

6. Before you can use one of the options to connect to the deployed systems, click Getting Started Guide. The
documentation names the users for each of the connectivity methods. The passwords for those users are set
to the master password you defined at the beginning of the deployment process. In the documentation,
other more functional users are listed with their passwords, which you can use to sign in to the deployed
system.
Within a few hours, a healthy SAP IDES system is deployed in Azure.
If you bought an SAP CAL subscription, SAP fully supports deployments through the SAP CAL on Azure. The
support queue is BC-VCM-CAL.
Running SAP NetWeaver on Microsoft Azure SUSE
Linux VMs
9/15/2017 8 min to read

This article describes various things to consider when you're running SAP NetWeaver on Microsoft Azure SUSE
Linux virtual machines (VMs). As of May 19, 2016, SAP NetWeaver is officially supported on SUSE Linux VMs on
Azure. All details regarding Linux versions, SAP kernel versions, and other prerequisites can be found in SAP Note
1928533 "SAP Applications on Azure: Supported Products and Azure VM types". Further documentation about SAP
on Linux VMs can be found here: Using SAP on Linux virtual machines (VMs).
The following information should help you avoid some potential pitfalls.

SUSE images on Azure for running SAP


For running SAP NetWeaver on Azure, use SUSE Linux Enterprise Server SLES 12 (SPx) or SLES for SAP - see also
SAP note 1928533. A special SUSE image is in the Azure Marketplace ("SLES 11 SP3 for SAP CAL"), but the image
is not intended for general usage. Do not use this image because it's reserved for the SAP Cloud Appliance Library
solution.
You need to use the Azure Resource Manager deployment framework for all installations on Azure. To look for
SUSE SLES images and versions by using Azure PowerShell or the Azure command-line interface (CLI), use the
commands shown below. You can then use the output, for example, to define the OS image in a JSON template for
deploying a new SUSE Linux VM. These PowerShell commands are valid for Azure PowerShell version 1.0.1 and
later.
While it's still possible to use the standard SLES images for SAP installations, it's recommended to make use of the
new SLES for SAP images. These images are available now in the Azure image gallery. More information about
these images can be found on the corresponding Azure Marketplace page or the SUSE FAQ web page about SLES
for SAP.
Look for existing publishers, including SUSE:

PS : Get-AzureRmVMImagePublisher -Location "West Europe" | where-object { $_.publishername -like "*US*" }
CLI : azure vm image list-publishers westeurope | grep "US"

Look for existing offerings from SUSE:

PS : Get-AzureRmVMImageOffer -Location "West Europe" -Publisher "SUSE"


CLI : azure vm image list-offers westeurope SUSE

Look for SUSE SLES offerings:

PS : Get-AzureRmVMImageSku -Location "West Europe" -Publisher "SUSE" -Offer "SLES"


PS : Get-AzureRmVMImageSku -Location "West Europe" -Publisher "SUSE" -Offer "SLES-SAP"
CLI : azure vm image list-skus westeurope SUSE SLES
CLI : azure vm image list-skus westeurope SUSE SLES-SAP

Look for a specific version of a SLES SKU:


PS : Get-AzureRmVMImage -Location "West Europe" -Publisher "SUSE" -Offer "SLES" -skus "12-SP2"
PS : Get-AzureRmVMImage -Location "West Europe" -Publisher "SUSE" -Offer "SLES-SAP" -skus "12-SP2"
CLI : azure vm image list westeurope SUSE SLES 12-SP2
CLI : azure vm image list westeurope SUSE SLES-SAP 12-SP2

Installing WALinuxAgent in a SUSE VM


The agent called WALinuxAgent is part of the SLES images in the Azure Marketplace. For information about
installing it manually (for example, when uploading a SLES OS virtual hard disk (VHD) from on-premises), see:
OpenSUSE
Azure
SUSE

SAP "enhanced monitoring"


SAP "enhanced monitoring" is a mandatory prerequisite to run SAP on Azure. Check details in SAP note 2191498
"SAP on Linux with Azure: Enhanced Monitoring".

Attaching Azure data disks to an Azure Linux VM


Never mount Azure data disks to an Azure Linux VM by using the device ID. Instead, use the universally unique
identifier (UUID). Be careful when you use graphical tools to mount Azure data disks, for example. Double-check
the entries in /etc/fstab.
The issue with the device ID is that it might change, and then the Azure VM might hang in the boot process. To
mitigate the issue, you could add the nofail parameter in /etc/fstab. But, be careful with nofail because applications
might use the mount point as before, and might write into the root file system in case an external Azure data disk
wasn't mounted during the boot.
The only exception to mounting via UUID is attaching an OS disk for troubleshooting purposes, as described in the
section that follows.
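To see what a UUID-based mount entry looks like, the following sketch builds one. On a live system the UUID comes from running blkid against the device (for example, blkid /dev/sdc1); the UUID, mount point, and file system below are made-up example values.

```shell
#!/bin/sh
# On a live system, find the UUID of a formatted data disk partition with:
#   blkid /dev/sdc1
# The UUID below is a made-up example used only to build the /etc/fstab entry.
UUID="12345678-abcd-ef01-2345-6789abcdef01"
MOUNTPOINT="/hana/data"

# 'nofail' lets the VM boot even if the disk is missing; see the caveat above
# about applications writing into the root file system in that case.
entry="UUID=$UUID  $MOUNTPOINT  xfs  defaults,nofail  0  2"
echo "$entry"
```

The echoed line is what would be appended to /etc/fstab instead of a /dev/sdX device-ID entry.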

Troubleshooting a SUSE VM that isn't accessible anymore


There might be situations where a SUSE VM on Azure hangs in the boot process (for example, with an error related
to mounting disks). You can verify this issue by using the boot diagnostics feature for Azure Virtual Machines v2 in
the Azure portal. For more information, see Boot diagnostics.
One way to solve the problem is to attach the OS disk from the damaged VM to another SUSE VM on Azure. Then
make appropriate changes like editing /etc/fstab or removing network udev rules, as described in the next section.
There is one important thing to consider. Deploying several SUSE VMs from the same Azure Marketplace image
(for example, SLES 11 SP4) causes the OS disk to always be mounted by the same UUID. Therefore, using the UUID
to attach an OS disk from a different VM that was deployed from the same Azure Marketplace image results in
two identical UUIDs. Two identical UUIDs can cause the VM used for troubleshooting to boot from the attached,
damaged OS disk instead of its original OS disk.
There are two ways to avoid problems:
Use a different Azure Marketplace image for the troubleshooting VM (for example, SLES 11 SPx instead of SLES
12).
Don't attach the damaged OS disk from another VM by using UUID--use something else.
Uploading a SUSE VM from on-premises to Azure
For a description of the steps to upload a SUSE VM from on-premises to Azure, see Prepare a SLES or openSUSE
virtual machine for Azure.
If you want to upload a VM without the deprovision step at the end (for example, to keep an existing SAP
installation, as well as the host name), check the following items:
Make sure that the OS disk is mounted by using UUID and not the device ID. Changing to UUID just in /etc/fstab
is not enough for the OS disk. Also, don't forget to adapt the boot loader through YaST or by editing
/boot/grub/menu.lst.
If you use the VHDX format for the SUSE OS disk and convert it to VHD for uploading to Azure, it is likely that
the network device changes from eth0 to eth1. To avoid problems when you're booting on Azure later, change
back to eth0 as described in Fixing eth0 in cloned SLES 11 VMware.
In addition to what's described in the article, we recommend that you remove this file:
/lib/udev/rules.d/75-persistent-net-generator.rules
You can also install the Azure Linux Agent (waagent) to help you avoid potential issues, as long as there are not
multiple NICs.
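The cleanup items above can be sketched as a dry-run script. The run helper only echoes the commands, and the fstab line checked at the end is an illustrative example rather than content read from disk.

```shell
#!/bin/sh
# Dry-run sketch of the pre-upload cleanup; 'run' only echoes the commands.
run() { echo "WOULD RUN: $*"; }

# Remove the udev rule that pins NIC names to MAC addresses, so the
# network device comes up as eth0 again after the move to Azure.
run rm /lib/udev/rules.d/75-persistent-net-generator.rules

# Sanity check (illustrative): an fstab entry that still mounts by device
# path instead of UUID should be flagged before uploading the disk.
fstab_line="/dev/sda1 / ext4 defaults 0 1"   # example line, not read from disk
case "$fstab_line" in
  /dev/*) echo "WARNING: device-path mount found; switch to UUID" ;;
  *)      echo "fstab entry OK" ;;
esac
```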

Deploying a SUSE VM on Azure


You should create new SUSE VMs by using JSON template files in the new Azure Resource Manager model. After
the JSON template file is created, you can deploy the VM by using the following CLI command as an alternative to
PowerShell:

azure group deployment create "<deployment name>" -g "<resource group name>" --template-file "
<../../filename.json>"
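For orientation, a heavily trimmed template skeleton can be generated and checked from the shell. This is illustrative only: a real template also needs networking, storage, and osProfile sections, and the name, location, and apiVersion values below are example assumptions.

```shell
#!/bin/sh
# Write a minimal, illustrative ARM template skeleton. A deployable
# template additionally needs networking, storage, and osProfile sections.
cat > template.json <<'EOF'
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Compute/virtualMachines",
      "name": "suse-sap-vm",
      "apiVersion": "2016-04-30-preview",
      "location": "westeurope",
      "properties": { }
    }
  ]
}
EOF
# Quick sanity check that the VM resource is present.
grep -c 'Microsoft.Compute/virtualMachines' template.json
```

The resulting file is what the `azure group deployment create ... --template-file` command above would consume.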

For more information about JSON template files, see Authoring Azure Resource Manager templates and Azure
quickstart templates.
For more information about CLI and Azure Resource Manager, see Use the Azure CLI for Mac, Linux, and Windows
with Azure Resource Manager.

SAP license and hardware key


For the official SAP-Azure certification, a new mechanism was introduced to calculate the SAP hardware key that's
used for the SAP license. The SAP kernel had to be adapted to make use of the new algorithm. Former SAP kernel
versions for Linux did not include this code change. Therefore, in certain situations (for example, Azure VM
resizing), the SAP hardware key changed and the SAP license was no longer valid. A solution is provided with
more recent SAP Linux kernels. The detailed SAP kernel patches are documented in SAP note 1928533.

SUSE sapconf package / tuned-adm


SUSE provides a package called "sapconf" that manages a set of SAP-specific settings. For more information about
what this package does, and how to install and use it, see: Using sapconf to prepare a SUSE Linux Enterprise Server
to run SAP systems and What is sapconf or how to prepare a SUSE Linux Enterprise Server for running SAP
systems?.
In the meantime, there is a new tool, 'tuned-adm', which replaces 'sapconf'. One can find more information about
this tool following the two links:
SLES documentation about the 'tuned-adm' profile sap-hana
Tuning Systems for SAP Workloads with 'tuned-adm' (chapter 6.2)

NFS share in distributed SAP installations


If you have a distributed installation--for example, where you want to install the database and the SAP application
servers in separate VMs--you can share the /sapmnt directory via Network File System (NFS). If you have problems
with the installation steps after you create the NFS share for /sapmnt, check to see if "no_root_squash" is set for the
share.
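An /etc/exports entry with no_root_squash can be sketched as follows. The shared path matches the /sapmnt directory from the text, while the 10.0.0.0/24 host range is a placeholder for your application-server subnet.

```shell
#!/bin/sh
# Example /etc/exports entry for sharing /sapmnt from the NFS server.
# 10.0.0.0/24 is a placeholder; substitute the subnet of your SAP VMs.
entry="/sapmnt  10.0.0.0/24(rw,no_root_squash,sync)"
echo "$entry"
# On the real server, append this line to /etc/exports and re-export:
#   exportfs -ra
```

Without no_root_squash, root on the application-server VMs is mapped to an unprivileged user on the share, which is what breaks the SAP installation steps mentioned above.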

Logical volumes
In the past, if one needed a large logical volume across multiple Azure data disks (for example, for the SAP
database), it was recommended to use the RAID management tool mdadm, since Linux Logical Volume Manager (LVM)
was not yet fully validated on Azure. To learn how to set up Linux RAID on Azure by using mdadm, see Configure
software RAID on Linux. In the meantime, as of the beginning of May 2016, Linux Logical Volume Manager is fully
supported on Azure and can be used as an alternative to mdadm. For more information regarding LVM on Azure,
read:
Configure LVM on a Linux VM in Azure.
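As a dry-run sketch, striping one logical volume across two data disks with LVM follows the pattern below. The run helper only echoes the commands, and /dev/sdc, /dev/sdd, the volume names, and the stripe size are example assumptions that vary per VM and workload.

```shell
#!/bin/sh
# Dry-run sketch of striping a logical volume across two Azure data disks;
# 'run' only echoes, and all device/volume names are examples.
run() { echo "WOULD RUN: $*"; }

run pvcreate /dev/sdc /dev/sdd              # mark the data disks as physical volumes
run vgcreate vg_sapdata /dev/sdc /dev/sdd   # group them into one volume group
run lvcreate -i 2 -I 256 -l 100%FREE -n lv_sapdata vg_sapdata  # 2-way striped LV
run mkfs.xfs /dev/vg_sapdata/lv_sapdata     # create a file system on the LV
```

The striped logical volume plays the role the mdadm software RAID played in the older guidance.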

Azure SUSE repository


If you have an issue with access to the standard Azure SUSE repository, you can use a command to reset it. Such
problems might happen if you create a private OS image in one Azure region and then copy the image to a
different Azure region to deploy new VMs that are based on this private OS image. Run the following command
inside the VM:

service guestregister restart

Gnome desktop
If you want to use the Gnome desktop to install a complete SAP demo system inside a single VM--including an SAP
GUI, browser, and SAP management console--use this hint to install it on the Azure SLES images:
For SLES 11:

zypper in -t pattern gnome

For SLES 12:

zypper in -t pattern gnome-basic

SAP support for Oracle on Linux in the cloud


There is a support restriction from Oracle on Linux in virtualized environments. Although this support restriction is
not an Azure-specific topic, it's important to understand. SAP does not support Oracle on SUSE or Red Hat in a
public cloud like Azure. To discuss this topic, contact Oracle directly.
Azure Virtual Machines planning and
implementation for SAP NetWeaver
9/8/2017 122 min to read

NOTE
Azure has two different deployment models for creating and working with resources: Resource Manager and classic. This
article covers using the Resource Manager deployment model, which Microsoft recommends for new deployments
instead of the classic deployment model.

Microsoft Azure enables companies to acquire compute and storage resources in minimal time without lengthy
procurement cycles. Azure Virtual Machines allow companies to deploy classical applications, like SAP
NetWeaver based applications into Azure and extend their reliability and availability without having further
resources available on-premises. Azure Virtual Machine Services also supports cross-premises connectivity,
which enables companies to actively integrate Azure Virtual Machines into their on-premises domains, their
Private Clouds and their SAP System Landscape. This white paper describes the fundamentals of Microsoft
Azure Virtual Machine and provides a walk-through of planning and implementation considerations for SAP
NetWeaver installations in Azure and as such should be the document to read before starting actual
deployments of SAP NetWeaver on Azure. The paper complements the SAP Installation Documentation and
SAP Notes, which represent the primary resources for installations and deployments of SAP software on given
platforms.

Summary
Cloud Computing is a widely used term, which is gaining more and more importance within the IT industry,
from small companies up to large and multinational corporations.
Microsoft Azure is the Cloud Services Platform from Microsoft, which offers a wide spectrum of new
possibilities. Now customers are able to rapidly provision and de-provision applications as a service in the
cloud, so they are not limited to technical or budgeting restrictions. Instead of investing time and budget into
hardware infrastructure, companies can focus on the application, business processes, and its benefits for
customers and users.
With Microsoft Azure Virtual Machine Services, Microsoft offers a comprehensive Infrastructure as a Service
(IaaS) platform. SAP NetWeaver based applications are supported on Azure Virtual Machines (IaaS). This
whitepaper describes how to plan and implement SAP NetWeaver based applications within Microsoft Azure as
the platform of choice.
The paper itself focuses on two main aspects:
The first part describes two supported deployment patterns for SAP NetWeaver based applications on Azure.
It also describes general handling of Azure with SAP deployments in mind.
The second part details implementing the two different scenarios described in the first part.
For additional resources see chapter Resources in this document.
Definitions upfront
Throughout the document, we use the following terms:
IaaS: Infrastructure as a Service
PaaS: Platform as a Service
SaaS: Software as a Service
ARM: Azure Resource Manager
SAP Component: an individual SAP application such as ECC, BW, Solution Manager, or EP. SAP components
can be based on traditional ABAP or Java technologies or a non-NetWeaver based application such as
Business Objects.
SAP Environment: one or more SAP components logically grouped to perform a business function such as
Development, QAS, Training, DR, or Production.
SAP Landscape: This refers to the entire set of SAP assets in a customer's IT landscape. The SAP landscape includes
all production and non-production environments.
SAP System: The combination of DBMS layer and application layer of, for example, an SAP ERP development
system, SAP BW test system, or SAP CRM production system. In Azure deployments, it is not supported to
divide these two layers between on-premises and Azure. This means an SAP system is either deployed on-
premises or it is deployed in Azure. However, you can deploy the different systems of an SAP landscape into
either Azure or on-premises. For example, you could deploy the SAP CRM development and test systems in
Azure but the SAP CRM production system on-premises.
Cloud-Only deployment: A deployment where the Azure subscription is not connected via a site-to-site or
ExpressRoute connection to the on-premises network infrastructure. In common Azure documentation these
kinds of deployments are also described as Cloud-Only deployments. Virtual Machines deployed with this
method are accessed through the internet and a public IP address and/or a public DNS name assigned to the
VMs in Azure. For Microsoft Windows the on-premises Active Directory (AD) and DNS is not extended to
Azure in these types of deployments. Hence the VMs are not part of the on-premises Active Directory. The same
is true for Linux implementations using, for example, OpenLDAP + Kerberos.

NOTE
Cloud-Only deployment in this document is defined as a deployment where complete SAP landscapes run exclusively in Azure
without extension of Active Directory / OpenLDAP or name resolution from on-premises into the public cloud. Cloud-Only
configurations are not supported for production SAP systems or configurations where SAP STMS or other on-premises
resources need to be used between SAP systems hosted on Azure and resources residing on-premises.

Cross-premises: Describes a scenario where VMs are deployed to an Azure subscription that has site-to-site,
multi-site, or ExpressRoute connectivity between the on-premises datacenter(s) and Azure. In common Azure
documentation, these kinds of deployments are also described as cross-premises scenarios. The reason for
the connection is to extend on-premises domains, on-premises Active Directory/OpenLDAP, and on-
premises DNS into Azure. The on-premises landscape is extended to the Azure assets of the subscription.
Having this extension, the VMs can be part of the on-premises domain. Domain users of the on-premises
domain can access the servers and can run services on those VMs (like DBMS services). Communication and
name resolution between VMs deployed on-premises and Azure deployed VMs is possible. This is the
scenario we expect most SAP assets to be deployed in. For more information, see this article and this.

NOTE
Cross-premises deployments of SAP systems where Azure Virtual Machines running SAP systems are members of an on-
premises domain are supported for production SAP systems. Cross-premises configurations are supported for deploying
parts or complete SAP landscapes into Azure. Even running the complete SAP landscape in Azure requires having those
VMs being part of on-premises domain and ADS/OpenLDAP. In former versions of the documentation, we talked about
Hybrid-IT scenarios, where the term Hybrid is rooted in the fact that there is a cross-premises connectivity between on-
premises and Azure. Plus, the fact that the VMs in Azure are part of the on-premises Active Directory / OpenLDAP.

Some Microsoft documentation describes cross-premises scenarios a bit differently, especially for DBMS HA
configurations. In the case of the SAP-related documents, the cross-premises scenario just boils down to having
a site-to-site or private (ExpressRoute) connectivity and the fact that the SAP landscape is distributed between
on-premises and Azure.
Resources
The following additional guides are available for the topic of SAP deployments on Azure:
Azure Virtual Machines planning and implementation for SAP NetWeaver (this document)
Azure Virtual Machines deployment for SAP NetWeaver
Azure Virtual Machines DBMS deployment for SAP NetWeaver

IMPORTANT
Wherever possible a link to the referring SAP Installation Guide is used (Reference InstGuide-01, see
http://service.sap.com/instguides). When it comes to the prerequisites and installation process, the SAP NetWeaver
Installation Guides should always be read carefully, as this document only covers specific tasks for SAP NetWeaver
systems installed in a Microsoft Azure Virtual Machine.

The following SAP Notes are related to the topic of SAP on Azure:

NOTE NUMBER TITLE

1928533 SAP Applications on Azure: Supported Products and Sizing

2015553 SAP on Microsoft Azure: Support Prerequisites

1999351 Troubleshooting Enhanced Azure Monitoring for SAP

2178632 Key Monitoring Metrics for SAP on Microsoft Azure

1409604 Virtualization on Windows: Enhanced Monitoring

2191498 SAP on Linux with Azure: Enhanced Monitoring

2243692 Linux on Microsoft Azure (IaaS) VM: SAP license issues

1984787 SUSE LINUX Enterprise Server 12: Installation notes

2002167 Red Hat Enterprise Linux 7.x: Installation and Upgrade

2069760 Oracle Linux 7.x SAP Installation and Upgrade

1597355 Swap-space recommendation for Linux

Also read the SCN Wiki that contains all SAP Notes for Linux.
General default limitations and maximum limitations of Azure subscriptions can be found in this article.

Possible Scenarios
SAP is often seen as one of the most mission-critical applications within enterprises. The architecture and
operations of these applications is mostly very complex and ensuring that you meet requirements on
availability and performance is important.
Thus enterprises have to think carefully about which applications can be run in a public cloud environment,
independent of the chosen cloud provider.
Possible system types for deploying SAP NetWeaver based applications within public cloud environments are
listed below:
1. Medium-sized production systems
2. Development systems
3. Testing systems
4. Prototype systems
5. Learning / Demonstration systems
In order to successfully deploy SAP systems into either Azure IaaS or IaaS in general, it is important to
understand the significant differences between the offerings of traditional outsourcers or hosters and IaaS
offerings. Whereas the traditional hoster or outsourcer adapts infrastructure (network, storage, and server type)
to the workload a customer wants to host, it is instead the customer's responsibility to choose the right
workload for IaaS deployments.
As a first step, customers need to verify the following items:
The SAP supported VM types of Azure
The SAP supported products/releases on Azure
The supported OS and DBMS releases for the specific SAP releases in Azure
SAPS throughput provided by different Azure SKUs
The answers to these questions can be read in SAP Note 1928533.
As a second step, Azure resource and bandwidth limitations need to be compared to actual resource
consumption of on-premises systems. Therefore, customers need to be familiar with the different capabilities of
the Azure types supported with SAP in the area of:
CPU and memory resources of different VM types and
IOPS bandwidth of different VM types and
Network capabilities of different VM types.
Most of that data can be found here (Linux) and here (Windows).
Keep in mind that the limits listed in the link above are upper limits. It does not mean that the limits for any of
the resources, for example IOPS can be provided under all circumstances. The exceptions though are the CPU
and memory resources of a chosen VM type. For the VM types supported by SAP, the CPU and memory
resources are reserved and as such available at any point in time for consumption within the VM.
The Microsoft Azure platform like other IaaS platforms is a multi-tenant platform. This means that storage,
network, and other resources are shared between tenants. Intelligent throttling and quota logic is used to
prevent one tenant from impacting the performance of another tenant (noisy neighbor) in a drastic way.
Though logic in Azure tries to keep variances in bandwidth experienced small, highly shared platforms tend to
introduce larger variances in resource/bandwidth availability than many customers are used to in their on-
premises deployments. As a result, you might experience different levels of bandwidth regarding networking or
storage I/O (the volume as well as latency) from minute to minute. The probability that an SAP system on Azure
could experience larger variances than in an on-premises system needs to be taken into account.
A last step is to evaluate availability requirements. It can happen, that the underlying Azure infrastructure needs
to get updated and requires the hosts running VMs to be rebooted. In these cases, VMs running on those hosts
would be shut down and restarted as well. The timing of such maintenance is done during non-core business
hours for a particular region but the potential window of a few hours during which a restart will occur is
relatively wide. There are various technologies within the Azure platform that can be configured to mitigate
some or all of the impact of such updates. Future enhancements of the Azure platform, DBMS, and SAP
application are designed to minimize the impact of such restarts.
In order to successfully deploy an SAP system onto Azure, the on-premises SAP system(s) Operating System,
Database, and SAP applications must appear in the SAP Azure support matrix, fit within the resources the Azure
infrastructure can provide, and work with the availability SLAs Microsoft Azure offers. Once those systems are
identified, you need to decide on one of the following two deployment scenarios.
Cloud-Only - Virtual Machine deployments into Azure without dependencies on the on-premises customer
network

This scenario is typical for trainings or demo systems, where all the components of SAP and non-SAP software
are installed within a single VM. Production SAP systems are not supported in this deployment scenario. In
general, this scenario meets the following requirements:
The VMs themselves are accessible over the public network. Direct network connectivity for the applications
running within the VMs to the on-premises network of either the company owning the demos or trainings
content or the customer is not necessary.
In case of multiple VMs representing the trainings or demo scenario, network communications and name
resolution need to work between the VMs. But communications between the set of VMs need to be isolated
so that several sets of VMs can be deployed side by side without interference.
Internet connectivity is required for the end user to remotely log in to the VMs hosted in Azure. Depending
on the guest OS, Terminal Services/RDS or VNC/ssh is used to access the VM to either fulfill the training
tasks or perform the demos. If SAP ports such as 3200, 3300 & 3600 are also exposed, the SAP
application instance can be accessed from any Internet-connected desktop.
The SAP system(s) (and VM(s)) represent a standalone scenario in Azure, which only requires public internet
connectivity for end-user access and does not require a connection to other VMs in Azure.
SAPGUI and a browser are installed and run directly on the VM.
A fast reset of a VM to the original state and new deployment of that original state again is required.
In the case of demo and training scenarios, which are realized in multiple VMs, an Active Directory /
OpenLDAP and/or DNS service is required for each set of VMs.
It is important to keep in mind that the VMs of the different sets can be deployed in parallel, with the VM
names within each set being the same.
Cross-Premises - Deployment of single or multiple SAP VMs into Azure with the requirement of being fully
integrated into the on-premises network

This scenario is a cross-premises scenario with many possible deployment patterns. It can be described
simply as running some parts of the SAP landscape on-premises and other parts of the SAP landscape on
Azure. All aspects of the fact that part of the SAP components are running on Azure should be transparent for
end users. Hence the SAP Transport Correction System (STMS), RFC Communication, Printing, Security (like
SSO), etc. work seamlessly for the SAP systems running on Azure. But the cross-premises scenario also
describes a scenario where the complete SAP landscape runs in Azure with the customers domain and DNS
extended into Azure.

NOTE
This is the deployment scenario that is supported for running productive SAP systems.

Read this article for more information on how to connect your on-premises network to Microsoft Azure.

IMPORTANT
When we are talking about cross-premises scenarios between Azure and on-premises customer deployments, we are
looking at the granularity of whole SAP systems. Scenarios which are not supported for cross-premises scenarios are:
Running different layers of SAP applications in different deployment methods. For example, running the DBMS layer
on-premises but the SAP application layer in Azure VMs, or vice versa.
Running some components of an SAP layer in Azure and some on-premises. For example, splitting instances of the SAP
application layer between on-premises and Azure VMs.
Distribution of VMs running SAP instances of one system over multiple Azure Regions is not supported.
The reason for these restrictions is the requirement for a very low latency high-performance network within one SAP
system, especially between the application instances and the DBMS layer of an SAP system.

Supported OS and Database Releases


Microsoft server software supported for Azure Virtual Machine Services is listed in this article:
http://support.microsoft.com/kb/2721672.
Supported operating system releases, database system releases supported on Azure Virtual Machine
Services in conjunction with SAP software are documented in SAP Note 1928533.
SAP applications and releases supported on Azure Virtual Machine Services are documented in SAP Note
1928533.
Only 64-bit images are supported to run as guest VMs in Azure for SAP scenarios. This also means that only
64-bit SAP applications and databases are supported.

Microsoft Azure Virtual Machine Services


The Microsoft Azure platform is an internet-scale cloud services platform hosted and operated in Microsoft data
centers. The platform includes the Microsoft Azure Virtual Machine Services (Infrastructure as a Service, or IaaS)
and a set of rich Platform as a Service (PaaS) capabilities.
The Azure platform reduces the need for up-front technology and infrastructure purchases. It simplifies
maintaining and operating applications by providing on-demand compute and storage to host, scale, and
manage web applications and connected applications. Infrastructure management is automated with a platform
that is designed for high availability and dynamic scaling to match usage needs with the option of a pay-as-
you-go pricing model.
With Azure Virtual Machine Services, Microsoft enables you to deploy custom server images to Azure as IaaS
instances (see Figure 4). The Virtual Machines in Azure are based on Hyper-V virtual hard disks (VHDs) and are
able to run different operating systems as guest OS.
From an operational perspective, the Azure Virtual Machine Service offers an experience similar to virtual
machines deployed on-premises. However, it has the significant advantage that you don't need to procure,
administer, and manage the infrastructure. Developers and administrators have full control of the operating
system image within these virtual machines. Administrators can log on remotely to those virtual machines to
perform maintenance and troubleshooting tasks as well as software deployment tasks. In regard to deployment,
the only restrictions are the sizes and capabilities of Azure VMs, which may not be configurable in as fine a
granularity as would be possible on-premises. There is a choice of VM types that represent a combination of:
Number of vCPUs,
Memory,
Number of VHDs that can be attached,
Network and Storage bandwidths.
The sizes and limitations of the various virtual machine sizes offered can be found in a table in this article
(Linux) and this article (Windows).
As you can see, there are different families or series of virtual machines. You can distinguish the following
families of VMs:
A0-A7 VM types: The first VM series Azure IaaS was introduced with. Not all of these are certified for SAP.
A8-A11 VM types: High-performance computing instances, running on different, better performing compute
hosts than other A-series VMs.
D/Dv2-Series VM types: Better performing than A0-A7. Not all of the VM types are certified with SAP.
DS/DSv2-Series VM types: Similar to D/Dv2-series, but are able to connect to Azure Premium Storage (see
chapter Azure Premium Storage of this document). Again not all VM types are certified with SAP.
G-Series VM types: High memory VM types.
GS-Series VM types: like G-Series but including the option to use Azure Premium Storage (see chapter Azure
Premium Storage of this document). When using GS-Series VMs as database servers, it's mandatory to use
Premium Storage for DB data and transaction log files
You may find the same CPU and memory configurations in different VM series. Nevertheless, when you look up
the throughput performance of these VMs across the different series, they might differ significantly despite
having the same CPU and memory configuration. The reason is that the underlying host server hardware at the
introduction of the different VM types had different throughput characteristics. Usually the difference in
throughput performance is also reflected in the price of the different VMs.
Not all VM series may be offered in every Azure Region (for Azure Regions, see the next
chapter). Also be aware that not all VMs or VM series are certified for SAP.

IMPORTANT
For the use of SAP NetWeaver based applications, only the subset of VM types and configurations listed in SAP Note
1928533 are supported.

Azure Regions
Microsoft allows you to deploy Virtual Machines into so-called Azure Regions. An Azure Region may be one or
multiple data centers that are located in close proximity. For most of the geopolitical regions in the world,
Microsoft has at least two Azure Regions. For example, in Europe there is an Azure Region North Europe
and one West Europe. Two such Azure Regions within a geopolitical region are separated by a distance
significant enough that natural or technical disasters do not affect both Azure Regions in the same geopolitical
region. Since Microsoft is steadily building out new Azure Regions in different geopolitical regions globally, the
number of these regions is steadily growing, and as of Dec 2015 reached 20 Azure Regions, with additional
regions already announced. You as a customer can deploy SAP systems into all these regions,
including the two Azure Regions in China. For current, up-to-date information about Azure Regions, see this
website: https://azure.microsoft.com/regions/
The Microsoft Azure Virtual Machine Concept
Microsoft Azure offers an Infrastructure as a Service (IaaS) solution to host Virtual Machines with functionality
similar to an on-premises virtualization solution. You are able to create Virtual Machines from the
Azure portal, PowerShell, or CLI, all of which also offer deployment and management capabilities.
Azure Resource Manager allows you to provision your applications using a declarative template. In a single
template, you can deploy multiple services along with their dependencies. You use the same template to
repeatedly deploy your application during every stage of the application life cycle.
More information about using Resource Manager templates can be found here:
Deploy and manage virtual machines by using Azure Resource Manager templates and the Azure CLI
Manage virtual machines using Azure Resource Manager and PowerShell
https://azure.microsoft.com/documentation/templates/
Another interesting feature is the ability to create images from Virtual Machines, which allows you to prepare
repositories from which you can quickly deploy Virtual Machine instances that meet your
requirements.
More information about creating images from Virtual Machines can be found in this article (Linux) and this
article (Windows).
Fault Domains
Fault Domains represent a physical unit of failure, very closely related to the physical infrastructure contained in
data centers, and while a physical blade or rack can be considered a Fault Domain, there is no direct one-to-one
mapping between the two.
When you deploy multiple Virtual Machines as part of one SAP system in Microsoft Azure Virtual Machine
Services, you can influence the Azure Fabric Controller to deploy your application into different Fault Domains,
thereby meeting the requirements of the Microsoft Azure SLA. However, the distribution of Fault Domains over
an Azure Scale Unit (collection of hundreds of Compute nodes or Storage nodes and networking) or the
assignment of VMs to a specific Fault Domain is something over which you do not have direct control. In order
to direct the Azure fabric controller to deploy a set of VMs over different Fault Domains, you need to assign an
Azure Availability Set to the VMs at deployment time. For more information on Azure Availability Sets, see
chapter Azure Availability Sets in this document.
Upgrade Domains
Upgrade Domains represent a logical unit that helps determine how the VMs of an SAP system, which consists
of SAP instances running in multiple VMs, are updated. When an upgrade occurs, Microsoft Azure goes through
the process of updating these Upgrade Domains one by one. By spreading VMs at deployment time over
different Upgrade Domains, you can protect your SAP system partly from potential downtime. In order to force
Azure to deploy the VMs of an SAP system spread over different Upgrade Domains, you need to set a specific
attribute at deployment time of each VM. Similar to Fault Domains, an Azure Scale Unit is divided into multiple
Upgrade Domains. In order to direct the Azure fabric controller to deploy a set of VMs over different Upgrade
Domains, you need to assign an Azure Availability Set to the VMs at deployment time. For more information on
Azure Availability Sets, see chapter Azure Availability Sets below.
Azure Availability Sets
Azure Virtual Machines within one Azure Availability Set are distributed by the Azure Fabric Controller over
different Fault and Upgrade Domains. The purpose of the distribution over different Fault and Upgrade
Domains is to prevent all VMs of an SAP system from being shut down in the case of infrastructure
maintenance or a failure within one Fault Domain. By default, VMs are not part of an Availability Set. The
participation of a VM in an Availability Set is defined at deployment time or later on by a reconfiguration and
re-deployment of a VM.
To understand the concept of Azure Availability Sets and the way Availability Sets relate to Fault and Upgrade
Domains, read this article
To define availability sets for ARM via a JSON template, see the REST API specs and search for "availability".
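A minimal sketch of how such a spread works conceptually (the domain counts of 3 fault domains and 5 update domains are illustrative assumptions; the actual placement is decided by the Azure Fabric Controller, not by your code):

```python
# Sketch: how VMs in one availability set are spread round-robin over
# fault and update domains. Domain counts are illustrative assumptions,
# not Azure internals.

def place_vms(vm_count, fault_domains=3, update_domains=5):
    """Assign each VM a (fault domain, update domain) pair round-robin."""
    return [(i % fault_domains, i % update_domains) for i in range(vm_count)]

# Six VMs of one SAP system in a single availability set:
placement = place_vms(6)
for vm, (fd, ud) in enumerate(placement):
    print(f"VM{vm}: fault domain {fd}, update domain {ud}")
```

With such a spread, a failure in one fault domain or a platform update rolling through one update domain never takes down all instances of the SAP system at once.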
Storage: Microsoft Azure Storage and Data Disks
Microsoft Azure Virtual Machines utilize different storage types. When implementing SAP on Azure Virtual
Machine Services, it is important to understand the differences between these two main types of storage:
Non-Persistent, volatile storage.
Persistent storage.
The non-persistent storage is directly attached to the running Virtual Machines and resides on the compute
nodes themselves: the local instance storage (temporary storage). Its size depends on the size of the Virtual
Machine chosen when the deployment started. This storage type is volatile, and therefore the disk is initialized
when a Virtual Machine instance is restarted. Typically, the pagefile for the operating system is located on this
temporary disk.

Windows
On Windows VMs the temp drive is mounted as drive D:\ in a deployed VM.

Linux
On Linux VMs, it's mounted as /mnt/resource or /mnt. See more details here:
How to Attach a Data Disk to a Linux Virtual Machine
https://docs.microsoft.com/azure/storage/storage-about-disks-and-vhds-linux#temporary-disk

The actual drive is volatile because it is stored on the host server itself. If the VM is moved in a
redeployment (for example, due to maintenance on the host or a shutdown and restart), the content of the drive is
lost. Therefore, it is not an option to store any important data on this drive. The type of media used for this type
of storage differs between VM series, with very different performance characteristics, which as of June
2015 look like:
A5-A7: Very limited performance. Not recommended for anything beyond page file
A8-A11: Very good performance characteristics with some ten thousand IOPS and >1GB/sec throughput.
D-Series: Very good performance characteristics with some ten thousand IOPS and >1GB/sec throughput.
DS-Series: Very good performance characteristics with some ten thousand IOPS and >1GB/sec throughput.
G-Series: Very good performance characteristics with some ten thousand IOPS and >1GB/sec throughput.
GS-Series: Very good performance characteristics with some ten thousand IOPS and >1GB/sec throughput.
The statements above apply to the VM types that are certified with SAP. The VM series with excellent IOPS
and throughput qualify for use by some DBMS features. For more information, see the DBMS Deployment
Guide.
Microsoft Azure Storage provides persisted storage with the typical levels of protection and redundancy seen on
SAN storage. Disks based on Azure Storage are virtual hard disks (VHDs) located in the Azure Storage services.
The local OS disk (Windows C:\, Linux /dev/sda1) is stored in Azure Storage, and additional volumes/disks
mounted to the VM are stored there, too.
It is possible to upload an existing VHD from on-premises or create empty ones from within Azure and attach
those to deployed VMs.
After creating or uploading a VHD into Azure Storage, it is possible to mount and attach it to an existing
Virtual Machine and to copy existing (unmounted) VHDs.
As those VHDs are persisted, data and changes within them are safe across reboots and re-creations of a Virtual
Machine instance. Even if an instance is deleted, these VHDs stay safe and can be redeployed, or in the case of
non-OS disks, can be mounted to other VMs.
Within Azure Storage, different redundancy levels can be configured:
The minimum level that can be selected is local redundancy, which is equivalent to three replicas of the data
within the same data center of an Azure Region (see chapter Azure Regions).
Zone redundant storage spreads the three replicas over different data centers within the same Azure
Region.
The default redundancy level is geographic redundancy, which asynchronously replicates the content into
another three replicas of the data in another Azure Region hosted in the same geopolitical region.
Also see the table on top of this article regarding the different redundancy options:
https://azure.microsoft.com/pricing/details/storage/
More information about Azure Storage can be found here:
https://azure.microsoft.com/documentation/services/storage/
https://azure.microsoft.com/services/site-recovery
https://docs.microsoft.com/rest/api/storageservices/Understanding-Block-Blobs--Append-Blobs--and-Page-
Blobs
https://blogs.msdn.com/b/azuresecurity/archive/2015/11/17/azure-disk-encryption-for-linux-and-
windows-virtual-machines-public-preview.aspx
Azure Standard Storage
Azure Standard Storage was the type of storage available when Azure IaaS was released. There are IOPS
quotas enforced per single disk. The latency experienced is not in the same class as that of SAN/NAS devices
typically deployed for high-end SAP systems hosted on-premises. Nevertheless, Azure Standard Storage has
proven sufficient for the many hundreds of SAP systems deployed in Azure in the meantime.
Disks that are stored in Azure Standard Storage Accounts are charged based on the actual data that is stored,
the volume of storage transactions, outbound data transfers, and the redundancy option chosen. Many disks can
be created at the maximum size of 1TB, but as long as they remain empty, there is no charge. If you then fill a
VHD with 100GB of data, you are charged for storing 100GB and not for the nominal size the VHD was created
with.
Azure Premium Storage
In April 2015, Microsoft introduced Azure Premium Storage. Premium Storage was introduced with the goal of
providing:
Better I/O latency.
Better throughput.
Less variability in I/O latency.
For that purpose, many changes were introduced of which the two most significant are:
Usage of SSD disks in the Azure Storage nodes
A new read cache that is backed by the local SSD of an Azure compute node
In contrast to Standard Storage, where capabilities do not change depending on the size of the disk (or VHD),
Premium Storage currently has three different disk categories, which are shown in this article:
https://azure.microsoft.com/pricing/details/storage/unmanaged-disks/
You see that IOPS per disk and throughput per disk are dependent on the size category of the disk.
The cost basis in the case of Premium Storage is not the actual data volume stored in such disks, but the size
category of the disk, independent of the amount of data that is stored within the disk.
You can also create disks on Premium Storage that do not directly map into the size categories shown. This
may be the case especially when copying disks from Standard Storage into Premium Storage. In such cases, a
mapping to the next-largest Premium Storage disk option is performed.
Be aware that only certain VM series can benefit from Azure Premium Storage. As of Dec 2015, these are
the DS- and GS-series. The DS-series is basically the same as the D-series, with the exception that the DS-series
has the ability to mount Premium Storage based disks in addition to disks that are hosted on Azure Standard
Storage. The same is true for the GS-series compared to the G-series.
If you check out the DS-series VMs in this article (Linux) and this article (Windows), you
realize that there are data throughput limitations for Premium Storage disks at the granularity of the VM level.
Different DS-series or GS-series VMs also have different limitations in regard to the number of data disks that
can be mounted. These limits are documented in the articles mentioned above as well. In essence this means
that if you, for example, mount 32 x P30 disks to a single DS14 VM, you can NOT get 32 x the maximum
throughput of a P30 disk. Instead, the maximum throughput at the VM level, as documented in the article, limits
data throughput.
More information on Premium Storage can be found here: http://azure.microsoft.com/blog/2015/04/16/azure-
premium-storage-now-generally-available-2
Managed Disks
Managed Disks are a new resource type in Azure Resource Manager that can be used instead of VHDs that are
stored in Azure Storage Accounts. Managed Disks automatically align with the Availability Set of the virtual
machine they are attached to and therefore increase the availability of your virtual machine and the services
that are running on the virtual machine. For more information, read the overview article.
We recommend that you use Managed Disks because they simplify the deployment and management of your
virtual machines. SAP currently only supports Premium Managed Disks. For more information, read SAP Note
1928533.
Azure Storage Accounts
When deploying services or VMs in Azure, deployment of VHDs and VM Images can be organized in units called
Azure Storage Accounts. When planning an Azure deployment, you need to carefully consider the restrictions of
Azure. On the one side, there is a limited number of Storage Accounts per Azure subscription. Although each
Azure Storage Account can hold a large number of VHD files, there is a fixed limit on the total IOPS per Storage
Account. When deploying hundreds of SAP VMs with DBMS systems that create significant IO load, it is
recommended to distribute high-IOPS DBMS VMs between multiple Azure Storage Accounts. Care must be
taken not to exceed the current limit of Azure Storage Accounts per subscription. Because storage is a vital part
of the database deployment for an SAP system, this concept is discussed in more detail in the already
referenced DBMS Deployment Guide.
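The distribution idea can be sketched as a simple packing exercise. The per-account ceiling of 20,000 IOPS below is an assumption for illustration; check the current Storage Account scalability targets for the real limit:

```python
# Sketch: pack VM disk workloads into Standard Storage Accounts without
# exceeding the per-account IOPS ceiling. The limit is an assumption.

ACCOUNT_IOPS_LIMIT = 20000

def assign_to_accounts(vm_iops):
    """Greedy first-fit: place each VM's IOPS demand into the first
    account with enough headroom, opening a new account if needed."""
    accounts = []    # used IOPS per storage account
    placement = []   # account index chosen for each VM
    for iops in vm_iops:
        for idx, used in enumerate(accounts):
            if used + iops <= ACCOUNT_IOPS_LIMIT:
                accounts[idx] += iops
                placement.append(idx)
                break
        else:
            accounts.append(iops)
            placement.append(len(accounts) - 1)
    return placement, accounts

# Four DBMS VMs with heavy IO plus two light application servers:
placement, accounts = assign_to_accounts([12000, 9000, 7000, 5000, 500, 500])
print(placement, accounts)
```

The greedy first-fit here is only one possible policy; the point is that the heavy DBMS workloads end up spread over more than one Storage Account so no single account hits its ceiling.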
More information about Azure Storage Accounts can be found in this article. Reading this article, you realize
that there are differences in the limitations between Azure Standard Storage Accounts and Premium Storage
Accounts. A major difference is the volume of data that can be stored within such a Storage Account: in
Standard Storage the volume is an order of magnitude larger than with Premium Storage. On the other hand,
the Standard Storage Account is severely limited in IOPS (see the column Total Request Rate), whereas the
Azure Premium Storage Account has no such limitation. We will discuss details and results of these differences
when discussing the deployments of SAP systems, especially the DBMS servers.
Within a Storage Account, you can create different containers for the purpose of organizing
and categorizing different VHDs. These containers are usually used, for example, to separate VHDs of different
VMs. There are no performance implications in using just one container or multiple containers underneath a
single Azure Storage Account.
Within Azure, a VHD name follows the following naming convention, which needs to provide a unique name for
the VHD within Azure:

http(s)://<storage account name>.blob.core.windows.net/<container name>/<vhd name>

As mentioned, the string above needs to uniquely identify the VHD that is stored in Azure Storage.
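As a small sketch, such a URL can be assembled from its parts (the account, container, and VHD names below are made-up examples):

```python
# Sketch: building the unique blob URL of a VHD from its parts.
# The account, container, and VHD names used here are made-up examples.

def vhd_url(storage_account, container, vhd_name, https=True):
    """Compose the blob URL following the naming convention above."""
    scheme = "https" if https else "http"
    return f"{scheme}://{storage_account}.blob.core.windows.net/{container}/{vhd_name}"

print(vhd_url("sapstore01", "vhds", "sapapp1-datadisk0.vhd"))
# -> https://sapstore01.blob.core.windows.net/vhds/sapapp1-datadisk0.vhd
```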
Microsoft Azure Networking
Microsoft Azure provides a network infrastructure, which allows the mapping of all the scenarios we want to
realize with SAP software. The capabilities are:
Access from the outside, directly to the VMs via Windows Terminal Services or ssh/VNC
Access to services and specific ports used by applications within the VMs
Internal Communication and Name Resolution between a group of VMs deployed as Azure VMs
Cross-premises connectivity between a customer's on-premises network and the Azure network
Cross Azure Region or data center connectivity between Azure sites
More information can be found here: https://azure.microsoft.com/documentation/services/virtual-network/
There are many different possibilities to configure name and IP resolution in Azure. In this document, Cloud-
Only scenarios rely on the default of using Azure DNS (in contrast to defining your own DNS service). There is
also a new Azure DNS service, which can be used instead of setting up your own DNS server. More information
can be found in this article and on this page.
For cross-premises scenarios, we are relying on the fact that the on-premises AD/OpenLDAP/DNS has been
extended via VPN or private connection to Azure. For certain scenarios as documented here, it might be
necessary to have an AD/OpenLDAP replica installed in Azure.
Because networking and name resolution is a vital part of the database deployment for an SAP system, this
concept is discussed in more detail in the DBMS Deployment Guide.
Azure Virtual Networks

By building up an Azure Virtual Network, you can define the address range of the private IP addresses allocated
by Azure DHCP functionality. In cross-premises scenarios, the IP address range defined is still allocated using
DHCP by Azure. However, Domain Name resolution is done on-premises (assuming that the VMs are a part of
an on-premises domain) and hence can resolve addresses beyond different Azure Cloud Services.
Every Virtual Machine in Azure needs to be connected to a Virtual Network.
More details can be found in this article and on this page.

NOTE
By default, once a VM is deployed you cannot change the Virtual Network configuration. The TCP/IP settings must be left
to the Azure DHCP server. Default behavior is Dynamic IP assignment.

The MAC address of the virtual network card may change, for example after a resize. In this case, the Windows
or Linux guest OS picks up the new network card and automatically uses DHCP to assign the IP and DNS
addresses.
Static IP Assignment

It is possible to assign fixed or reserved IP addresses to VMs within an Azure Virtual Network. Running the VMs
in an Azure Virtual Network opens up the possibility to leverage this functionality if needed or required for
some scenarios. The IP assignment remains valid throughout the existence of the VM, independent of whether
the VM is running or shut down. As a result, you need to take the overall number of VMs (running and stopped
VMs) into account when defining the range of IP addresses for the Virtual Network. The IP address remains
assigned either until the VM and its network interface are deleted or until the IP address gets de-assigned again.
For more information, read this article.
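Because reserved IP addresses of stopped VMs stay allocated, it helps to check the planned address range against the total VM count up front. A minimal sketch using Python's ipaddress module (the number of addresses Azure reserves per subnet, 5 below, is an assumption for illustration):

```python
import ipaddress

# Sketch: check whether a planned subnet can hold all VMs, running or
# stopped. The count of addresses Azure keeps per subnet (5 here) is
# an assumption used only for illustration.

AZURE_RESERVED_PER_SUBNET = 5

def usable_addresses(cidr):
    """Addresses left for VMs after the platform-reserved ones."""
    subnet = ipaddress.ip_network(cidr)
    return subnet.num_addresses - AZURE_RESERVED_PER_SUBNET

planned_vms = 120   # running + stopped VMs that keep their reserved IPs
for cidr in ("10.1.0.0/26", "10.1.0.0/24"):
    ok = usable_addresses(cidr) >= planned_vms
    print(f"{cidr}: {usable_addresses(cidr)} usable -> {'fits' if ok else 'too small'}")
```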
Multiple NICs per VM

You can define multiple virtual network interface cards (vNIC) for an Azure Virtual Machine. With the ability to
have multiple vNICs you can start to set up network traffic separation where, for example, client traffic is routed
through one vNIC and backend traffic is routed through a second vNIC. Depending on the type of VM, there are
different limitations in regard to the number of vNICs. Exact details, functionality, and restrictions can be found
in these articles:
Create a Windows VM with multiple NICs
Create a Linux VM with multiple NICs
Deploy multi NIC VMs using a template
Deploy multi NIC VMs using PowerShell
Deploy multi NIC VMs using the Azure CLI
Site-to-Site Connectivity
In a cross-premises deployment, Azure VMs and the on-premises network are linked with a transparent and
permanent VPN connection. This is expected to become the most common SAP deployment pattern in Azure.
The assumption is that operational procedures and processes with SAP instances in Azure should work
transparently. This means you should be able to print out of these systems as well as use the SAP Transport
Management System (TMS) to transport changes from a development system in Azure to a test system
deployed on-premises. More documentation around site-to-site can be found in this article.
VPN Tunnel Device

In order to create a site-to-site connection (on-premises data center to Azure data center), you need to either
obtain and configure a VPN device, or use Routing and Remote Access Service (RRAS) which was introduced as
a software component with Windows Server 2012.
Create a virtual network with a site-to-site VPN connection using PowerShell
About VPN devices for Site-to-Site VPN Gateway connections
VPN Gateway FAQ
The figure above shows how two Azure subscriptions have IP address subranges reserved for use in Virtual
Networks in Azure. The connectivity from the on-premises network to Azure is established via VPN.
Point-to-Site VPN
Point-to-site VPN requires every client machine to connect with its own VPN into Azure. For the SAP scenarios
we are looking at, point-to-site connectivity is not practical. Therefore, no further references are given to point-
to-site VPN connectivity.
More information can be found here
Configure a Point-to-Site connection to a VNet using the Azure portal
Configure a Point-to-Site connection to a VNet using PowerShell
Multi-Site VPN
Azure nowadays also offers the possibility to create Multi-Site VPN connectivity for one Azure subscription.
Previously, a single subscription was limited to one site-to-site VPN connection. This limitation went away with
Multi-Site VPN connections for a single subscription, making it possible to leverage more than one Azure
Region for a specific subscription through cross-premises configurations.
For more documentation, please see this article
VNet to VNet Connection
With Multi-Site VPN, you need to configure a separate Azure Virtual Network in each of the regions. However,
very often you have the requirement that the software components in the different regions communicate with
each other. Ideally, this communication should not be routed from one Azure Region to on-premises and from
there to the other Azure Region. As a shortcut, Azure offers the possibility to configure a connection from an
Azure Virtual Network in one region to an Azure Virtual Network hosted in another region. This functionality is
called VNet-to-VNet connection. More details on this functionality can be found here:
https://azure.microsoft.com/documentation/articles/vpn-gateway-vnet-vnet-rm-ps/.
Private Connection to Azure ExpressRoute
Microsoft Azure ExpressRoute allows the creation of private connections between Azure data centers and either
the customer's on-premises infrastructure or in a co-location environment. ExpressRoute is offered by various
MPLS (packet switched) VPN providers or other Network Service Providers. ExpressRoute connections do not
go over the public Internet. ExpressRoute connections offer higher security, more reliability through multiple
parallel circuits, faster speeds, and lower latencies than typical connections over the Internet.
Find more details on Azure ExpressRoute and offerings here:
https://azure.microsoft.com/documentation/services/expressroute/
https://azure.microsoft.com/pricing/details/expressroute/
https://azure.microsoft.com/documentation/articles/expressroute-faqs/
ExpressRoute enables connecting multiple Azure subscriptions through one ExpressRoute circuit, as documented here
https://azure.microsoft.com/documentation/articles/expressroute-howto-linkvnet-arm/
https://azure.microsoft.com/documentation/articles/expressroute-howto-circuit-arm/
Forced tunneling in case of cross-premises
For VMs joining on-premises domains through site-to-site, point-to-site, or ExpressRoute connections, you need
to make sure that the Internet proxy settings are deployed for all the users in those VMs as well. By default,
software running in those VMs, or users accessing the Internet with a browser, would not go through the
company proxy but would connect straight through Azure to the Internet. Even the proxy setting is not a
complete solution for directing traffic through the company proxy, since it is the responsibility of software and
services to check for the proxy. If software running in the VM does not do that, or if an administrator
manipulates the settings, traffic to the Internet can be detoured again directly through Azure to the Internet.
To avoid this, you can configure forced tunneling with site-to-site connectivity between on-premises and Azure.
A detailed description of the forced tunneling feature is published here:
https://azure.microsoft.com/documentation/articles/vpn-gateway-forced-tunneling-rm/
Forced Tunneling with ExpressRoute is enabled by customers advertising a default route via the ExpressRoute
BGP peering sessions.
Summary of Azure Networking
This chapter contained many important points about Azure Networking. Here is a summary of the main points:
Azure Virtual Networks allow you to set up the network according to your own needs
Azure Virtual Networks can be leveraged to assign IP address ranges to VMs or to assign fixed IP addresses to VMs
To set up a site-to-site or point-to-site connection, you need to create an Azure Virtual Network first
Once a virtual machine has been deployed, it is no longer possible to change the Virtual Network assigned to the VM
Quotas in Azure Virtual Machine Services
Be aware that the storage and network infrastructure is shared between VMs running a variety of services in the
Azure infrastructure. Just as in customers' own data centers, over-provisioning of some of the infrastructure
resources does take place to a degree. The Microsoft Azure platform uses disk, CPU, network, and other quotas
to limit resource consumption and to preserve consistent and deterministic performance. The different VM types
(A5, A6, etc.) have different quotas for the number of disks, CPU, RAM, and network.

NOTE
CPU and memory resources of the VM types supported by SAP are pre-allocated on the host nodes. This means that
once the VM is deployed, the resources on the host are available as defined by the VM type.

When planning and sizing SAP on Azure solutions the quotas for each virtual machine size must be considered.
The VM quotas are described here (Linux) and here (Windows).
The quotas described represent the theoretical maximum values. The IOPS limit per disk may be achieved with
small IOs (8 KB) but possibly not with large IOs (1 MB). The IOPS limit is enforced at the granularity of a
single disk.
As a rough decision tree to decide whether an SAP system fits into Azure Virtual Machine Services and its
capabilities or whether an existing system needs to be configured differently in order to deploy the system on
Azure, the decision tree below can be used:

Step 1: The most important information to start with is the SAPS requirement for a given SAP system. The SAPS
requirements need to be separated out into the DBMS part and the SAP application part, even if the SAP system
is already deployed on-premises in a 2-tier configuration. For existing systems, the SAPS related to the
hardware in use often can be determined or estimated based on existing SAP benchmarks. The results can be
found here: http://global.sap.com/campaigns/benchmark/index.epx. For newly deployed SAP systems, you
should have gone through a sizing exercise, which should determine the SAPS requirements of the system. See
also this blog and attached document for SAP sizing on Azure:
http://blogs.msdn.com/b/saponsqlserver/archive/2015/12/01/new-white-paper-on-sizing-sap-solutions-on-
azure-public-cloud.aspx
Step 2: For existing systems, the I/O volume and I/O operations per second on the DBMS server should be
measured. For newly planned systems, the sizing exercise for the new system should also give rough ideas of
the I/O requirements on the DBMS side. If unsure, you may eventually need to conduct a Proof of Concept.
Step 3: Compare the SAPS requirement for the DBMS server with the SAPS the different VM types of Azure can
provide. The information on SAPS of the different Azure VM types is documented in SAP Note 1928533. The
focus should be on the DBMS VM first since the database layer is the layer in an SAP NetWeaver system that
does not scale out in the majority of deployments. In contrast, the SAP application layer can be scaled out. If
none of the SAP-supported Azure VM types can deliver the required SAPS, the workload of the planned SAP
system can't be run on Azure. You either need to deploy the system on-premises, or you need to change the
workload volume for the system.
Step 4: As documented here (Linux) and here (Windows), Azure enforces an IOPS quota per disk, independent of
whether you use Standard Storage or Premium Storage. Depending on the VM type, the number of data disks that
can be mounted varies. As a result, you can calculate a maximum IOPS number that can be achieved with each of
the different VM types. Depending on the database file layout, you can stripe disks to become one volume in the
guest OS. However, if the current IOPS volume of a deployed SAP system exceeds the calculated limits of the
largest VM type of Azure, and if there is no chance to compensate with more memory, the workload of the SAP
system can be impacted severely. In such cases, you can hit a point where you should not deploy the system on
Azure.
Step 5: Especially in SAP systems that are deployed on-premises in 2-tier configurations, the chances are
that the system might need to be configured on Azure in a 3-tier configuration. In this step, you need to check
whether there is a component in the SAP application layer that can't be scaled out and that would not fit into
the CPU and memory resources the different Azure VM types offer. If there indeed is such a component, the
SAP system and its workload can't be deployed into Azure. But if you can scale out the SAP application
components into multiple Azure VMs, the system can be deployed into Azure.
Step 6: If the DBMS and SAP application layer components can be run in Azure VMs, the configuration needs to
be defined with regard to:
Number of Azure VMs
VM types for the individual components
Number of VHDs in DBMS VM to provide enough IOPS

Managing Azure Assets


Azure Portal
The Azure portal is one of three interfaces to manage Azure VM deployments. The basic management tasks, like
deploying VMs from images, can be done through the Azure portal. In addition, the creation of Storage
Accounts, Virtual Networks, and other Azure components are also tasks the Azure portal can handle very well.
However, functionality like uploading VHDs from on-premises to Azure or copying a VHD within Azure are
tasks, which require either third-party tools or administration through PowerShell or CLI.

Administration and configuration tasks for the Virtual Machine instance are possible from within the Azure
portal.
Besides restarting and shutting down a Virtual Machine, you can also attach, detach, and create data disks for
the Virtual Machine instance, capture the instance for image preparation, and configure the size of the Virtual
Machine instance.
The Azure portal provides basic functionality to deploy and configure VMs and many other Azure services.
However, not all available functionality is covered by the Azure portal. In the Azure portal, it's not possible to
perform tasks like:
Uploading VHDs to Azure
Copying VMs
Management via Microsoft Azure PowerShell cmdlets
Windows PowerShell is a powerful and extensible framework that has been widely adopted by customers
deploying larger numbers of systems in Azure. After the installation of PowerShell cmdlets on a desktop, laptop
or dedicated management station, the PowerShell cmdlets can be run remotely.
The process to enable a local desktop/laptop for the usage of Azure PowerShell cmdlets and how to configure
those for the usage with the Azure subscription(s) is described in this article.
More detailed steps on how to install, update, and configure the Azure PowerShell cmdlets can also be found in
this chapter of the Deployment Guide.
Customer experience so far has been that PowerShell (PS) is certainly the more powerful tool to deploy VMs
and to create custom steps in the deployment of VMs. All of the customers running SAP instances in Azure are
using PS cmdlets to supplement management tasks they do in the Azure portal or are even using PS cmdlets
exclusively to manage their deployments in Azure. Since the Azure-specific cmdlets share the same naming
convention as the more than 2000 Windows-related cmdlets, it is an easy task for Windows administrators to
leverage those cmdlets.
See example here: http://blogs.technet.com/b/keithmayer/archive/2015/07/07/18-steps-for-end-to-end-iaas-
provisioning-in-the-cloud-with-azure-resource-manager-arm-powershell-and-desired-state-configuration-
dsc.aspx
Deployment of the Azure Monitoring Extension for SAP (see chapter Azure Monitoring Solution for SAP in this
document) is only possible via PowerShell or CLI. Therefore it is mandatory to set up and configure PowerShell
or CLI when deploying or administering an SAP NetWeaver system in Azure.
As Azure provides more functionality, new PS cmdlets are going to be added, which requires an update of the
cmdlets. Therefore, it makes sense to check the Azure download site
(https://azure.microsoft.com/downloads/) at least once a month for a new version of the cmdlets. The new
version is installed on top of the older version.
For a general list of Azure-related PowerShell commands check here:
https://docs.microsoft.com/powershell/azure/overview.
Management via Microsoft Azure CLI commands
For customers who use Linux and want to manage Azure resources, PowerShell might not be an option.
Microsoft offers Azure CLI as an alternative. The Azure CLI provides a set of open-source, cross-platform
commands for working with the Azure platform. The Azure CLI provides much of the same functionality found
in the Azure portal.
For information about installation, configuration and how to use CLI commands to accomplish Azure tasks see
Install the Azure CLI
Deploy and manage virtual machines by using Azure Resource Manager templates and the Azure CLI
Use the Azure CLI for Mac, Linux, and Windows with Azure Resource Manager
Also read chapter Azure CLI for Linux VMs in the Deployment Guide on how to use Azure CLI to deploy the
Azure Monitoring Extension for SAP.

Different ways to deploy VMs for SAP in Azure


In this chapter, you learn the different ways to deploy a VM in Azure. Additional preparation procedures, as well
as handling of VHDs and VMs in Azure are covered in this chapter.
Deployment of VMs for SAP
Microsoft Azure offers multiple ways to deploy VMs and associated disks. Thus, it is very important to
understand the differences, since preparations of the VMs might differ depending on the method of
deployment. In general, we take a look at the following scenarios:
Moving a VM from on-premises to Azure with a non-generalized disk
You plan to move a specific SAP system from on-premises to Azure. This can be done by uploading the VHD,
which contains the OS, the SAP Binaries, and DBMS binaries plus the VHDs with the data and log files of the
DBMS to Azure. In contrast to scenario #2 below, you keep the hostname, SAP SID, and SAP user accounts in the
Azure VM as they were configured in the on-premises environment. Therefore, generalizing the image is not
necessary. See chapters Preparation for moving a VM from on-premises to Azure with a non-generalized disk
of this document for on-premises preparation steps and upload of non-generalized VMs or VHDs to Azure.
Read chapter Scenario 3: Moving a VM from on-premises using a non-generalized Azure VHD with SAP in the
Deployment Guide for detailed steps of deploying such an image in Azure.
Deploying a VM with a customer-specific image
Due to specific patch requirements of your OS or DBMS version, the provided images in the Azure Marketplace
might not fit your needs. Therefore, you might need to create a VM using your own private OS/DBMS VM
image, which can be deployed several times afterwards. To prepare such a private image for duplication, the
following items have to be considered:

Windows
The Windows settings (like Windows SID and hostname) must be abstracted/generalized on the on-premises
VM via the sysprep command. See more details here:
https://docs.microsoft.com/azure/virtual-machines/windows/upload-generalized-managed

Linux
Follow the steps described in these articles for SUSE, Red Hat, or Oracle Linux, to prepare a VHD to be
uploaded to Azure.

If you have already installed SAP content in your on-premises VM (especially for 2-Tier systems), you can adapt
the SAP system settings after the deployment of the Azure VM through the instance rename procedure
supported by the SAP Software Provisioning Manager (SAP Note 1619720). See chapters Preparation for
deploying a VM with a customer-specific image for SAP and Uploading a VHD from on-premises to Azure of
this document for on-premises preparation steps and upload of a generalized VM to Azure. Read chapter
Scenario 2: Deploying a VM with a custom image for SAP in the Deployment Guide for detailed steps of
deploying such an image in Azure.
Deploying a VM out of the Azure Marketplace
You would like to use a Microsoft or third-party provided VM image from the Azure Marketplace to deploy your
VM. After you deployed your VM in Azure, you follow the same guidelines and tools to install the SAP software
and/or DBMS inside your VM as you would do in an on-premises environment. For more detailed deployment
description, please see chapter Scenario 1: Deploying a VM out of the Azure Marketplace for SAP in the
Deployment Guide.
Preparing VMs with SAP for Azure
Before uploading VMs into Azure you need to make sure the VMs and VHDs fulfill certain requirements. There
are small differences depending on the deployment method that is used.
Preparation for moving a VM from on-premises to Azure with a non-generalized disk
A common deployment method is to move an existing VM, which runs an SAP system, from on-premises to
Azure. That VM and the SAP system in the VM should run in Azure using the same hostname and, very likely,
the same SAP SID. In this case, the guest OS of the VM should not be generalized for multiple deployments. If the
on-premises network got extended into Azure (see chapter cross-premises - Deployment of single or multiple
SAP VMs into Azure with the requirement of being fully integrated into the on-premises network in this
document), then even the same domain accounts can be used within the VM as those were used before on-
premises.
Requirements when preparing your own Azure VM Disk are:
Originally, the VHD containing the operating system could have a maximum size of only 127 GB. This
limitation was eliminated at the end of March 2015. Now the VHD containing the operating system can be up
to 1 TB in size, like any other Azure Storage hosted VHD.
It needs to be in the fixed VHD format. Dynamic VHDs or VHDs in VHDX format are not yet supported on
Azure. Dynamic VHDs are converted to static VHDs when you upload the VHD with PowerShell cmdlets or
CLI.
VHDs that are mounted to the VM and should be mounted again in Azure to the VM need to be in the fixed
VHD format as well. Please read this article (Linux) and this article (Windows) for size limits of data disks.
Dynamic VHDs are converted to static VHDs when you upload the VHD with PowerShell cmdlets or CLI.
Add another local account with administrator privileges which can be used by Microsoft support or which
can be assigned as context for services and applications to run in until the VM is deployed and more
appropriate users can be used.
For the case of using a Cloud-Only deployment scenario (see chapter Cloud-Only - Virtual Machine
deployments into Azure without dependencies on the on-premises customer network of this document) in
combination with this deployment method, domain accounts might not work once the Azure Disk is
deployed in Azure. This is especially true for accounts which are used to run services like the DBMS or SAP
applications. Therefore you need to replace such domain accounts with VM local accounts and delete the on-
premises domain accounts in the VM. Keeping on-premises domain users in the VM image is not an issue
when the VM is deployed in the cross-premises scenario as described in chapter Cross-Premises -
Deployment of single or multiple SAP VMs into Azure with the requirement of being fully integrated into the
on-premises network in this document.
If domain accounts were used as DBMS logins or users when running the system on-premises and those
VMs are supposed to be deployed in Cloud-Only scenarios, the domain users need to be deleted. You need
to make sure that the local administrator plus another VM local user is added as a login/user into the DBMS
as administrators.
Add other local accounts as those might be needed for the specific deployment scenario.

Windows
In this scenario no generalization (sysprep) of the VM is required to upload and deploy the VM on Azure.
Make sure that drive D:\ is not used. Set disk automount for attached disks as described in chapter Setting
automount for attached disks in this document.

Linux
In this scenario no generalization (waagent -deprovision) of the VM is required to upload and deploy the
VM on Azure. Make sure that /mnt/resource is not used and that ALL disks are mounted via uuid. For the
OS disk, make sure that the bootloader entry also reflects the uuid-based mount.
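One way to express the UUID-based mount requirement: look up the UUID of the device and reference it in /etc/fstab instead of the device name. The UUID, mount point, and file system below are placeholders, not values from this document; on a real system the UUID comes from blkid.

```shell
# Sketch: build a UUID-based /etc/fstab entry (placeholder values).
# On a real system: UUID=$(blkid -s UUID -o value /dev/sdc1)
UUID="11111111-2222-3333-4444-555555555555"
MOUNTPOINT="/hana/data"
FSTYPE="xfs"

# This line replaces a /dev/sdX-based entry in /etc/fstab
FSTAB_LINE="UUID=$UUID  $MOUNTPOINT  $FSTYPE  defaults  0 2"
echo "$FSTAB_LINE"
```

Mounting by UUID matters because Azure does not guarantee that device names like /dev/sdc keep the same order across reboots or redeployments, while the file system UUID travels with the disk.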

Preparation for deploying a VM with a customer-specific image for SAP


VHD files that contain a generalized OS are stored in containers on Azure Storage Accounts or as Managed Disk
images. You can deploy a new VM from such an image by referencing the VHD or Managed Disk image as a
source in your deployment template files as described in chapter Scenario 2: Deploying a VM with a custom
image for SAP of the Deployment Guide.
Requirements when preparing your own Azure VM Image are:
Originally, the VHD containing the operating system could have a maximum size of only 127 GB. This
limitation was eliminated at the end of March 2015. Now the VHD containing the operating system can be up
to 1 TB in size, like any other Azure Storage hosted VHD.
It needs to be in the fixed VHD format. Dynamic VHDs or VHDs in VHDX format are not yet supported on
Azure. Dynamic VHDs are converted to static VHDs when you upload the VHD with PowerShell cmdlets or
CLI.
VHDs that are mounted to the VM and should be mounted again in Azure to the VM need to be in the fixed
VHD format as well. Please read this article (Linux) and this article (Windows) for size limits of data disks.
Dynamic VHDs are converted to static VHDs when you upload the VHD with PowerShell cmdlets or CLI.
Since all the Domain users registered as users in the VM will not exist in a Cloud-Only scenario (see chapter
Cloud-Only - Virtual Machine deployments into Azure without dependencies on the on-premises customer
network of this document), services using such domain accounts might not work once the Image is deployed
in Azure. This is especially true for accounts which are used to run services like DBMS or SAP applications.
Therefore you need to replace such domain accounts with VM local accounts and delete the on-premises
domain accounts in the VM. Keeping on-premises domain users in the VM image might not be an issue
when the VM is deployed in the cross-premises scenario as described in chapter Cross-Premises -
Deployment of single or multiple SAP VMs into Azure with the requirement of being fully integrated into the
on-premises network in this document.
Add another local account with administrator privileges which can be used by Microsoft support in problem
investigations or which can be assigned as context for services and applications to run in until the VM is
deployed and more appropriate users can be used.
In Cloud-Only deployments and where domain accounts were used as DBMS logins or users when running
the system on-premises, the domain users should be deleted. You need to make sure that the local
administrator plus another VM local user is added as a login/user of the DBMS as administrators.
Add other local accounts as those might be needed for the specific deployment scenario.
If the image contains an installation of SAP NetWeaver, and renaming of the host name from the original
name at the point of the Azure deployment is likely, it is recommended to copy the latest version of the SAP
Software Provisioning Manager DVD into the template. This enables you to easily use the SAP-provided
rename functionality to adapt the changed hostname and/or change the SID of the SAP system within the
deployed VM image as soon as a new copy is started.

Windows
Make sure that drive D:\ is not used. Set disk automount for attached disks as described in chapter Setting
automount for attached disks in this document.

Linux
Make sure that /mnt/resource is not used and that ALL disks are mounted via uuid. For the OS disk, make
sure the bootloader entry also reflects the uuid-based mount.

SAP GUI (for administrative and setup purposes) can be pre-installed in such a template.
Other software necessary to run the VMs successfully in cross-premises scenarios can be installed as long as
this software can work with the rename of the VM.
If the VM is prepared sufficiently to be generic and eventually independent of accounts/users not available in
the targeted Azure deployment scenario, the last preparation step of generalizing such an image is conducted.
Generalizing a VM

Windows
The last step is to log in to the VM with an administrator account. Open a Windows command window as
administrator. Go to %windir%\system32\sysprep and execute sysprep.exe. A small window appears. It is
important to check the Generalize option (the default is unchecked) and change the Shutdown option from its
default of Reboot to Shutdown. This procedure assumes that the sysprep process is executed on-premises in
the guest OS of a VM. If you want to perform the procedure with a VM already running in Azure, follow the
steps described in this article.
Linux
How to capture a Linux virtual machine to use as a Resource Manager template

Transferring VMs and VHDs between on-premises to Azure


Since uploading VM images and disks to Azure is not possible via the Azure portal, you need to use Azure
PowerShell cmdlets or CLI. Another possibility is to use the tool AzCopy, which can copy VHDs between
on-premises and Azure (in both directions). It can also copy VHDs between Azure regions. Please consult this
documentation for download and usage of AzCopy.
A third alternative is to use one of the various third-party GUI-oriented tools. However, make sure that these
tools support Azure page blobs. For our purposes, we need to use Azure page blob storage (the differences are
described here: https://docs.microsoft.com/rest/api/storageservices/Understanding-Block-Blobs--Append-
Blobs--and-Page-Blobs). The tools provided by Azure are also very efficient in compressing the VMs and VHDs
that need to be uploaded. This is important because the compression reduces the upload time, which varies
anyway depending on the upload link to the Internet from the on-premises facility and the targeted Azure
deployment region. It is a fair assumption that uploading a VM or VHD from a European location to U.S.-based
Azure data centers takes longer than uploading the same VMs/VHDs to European Azure data centers.
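As a back-of-the-envelope feel for those upload times before compression: a 127 GB VHD over a sustained 100 Mbit/s uplink needs close to three hours. Both numbers below are illustrative assumptions, not measurements.

```shell
# Rough upload-time estimate: size in GB over bandwidth in Mbit/s.
SIZE_GB=127       # assumed VHD size (maximum OS disk before the 1 TB limit)
MBIT_PER_S=100    # assumed sustained uplink bandwidth

SIZE_MBIT=$((SIZE_GB * 1024 * 8))        # GB -> Mbit
SECS=$((SIZE_MBIT / MBIT_PER_S))         # seconds at the assumed rate
printf '~%d h %d min\n' $((SECS / 3600)) $(((SECS % 3600) / 60))   # ~2 h 53 min
```

Real transfers are slower due to protocol overhead and link sharing, which is why the compression performed by the Azure upload tools matters in practice.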
Uploading a VHD from on-premises to Azure
To upload an existing VM or VHD from the on-premises network, such a VM or VHD needs to meet the
requirements listed in chapter Preparation for moving a VM from on-premises to Azure with a non-generalized
disk of this document.
Such a VM does NOT need to be generalized and can be uploaded in the state it is in after shutdown on the
on-premises side. The same is true for additional VHDs that don't contain any operating system.
Uploading a VHD and making it an Azure Disk

In this case, we want to upload a VHD, either with or without an OS in it, and mount it to a VM as a data disk
or use it as an OS disk. This is a multi-step process.
PowerShell
Log in to your subscription with Login-AzureRmAccount
Set the subscription of your context with Set-AzureRmContext and parameter SubscriptionId or
SubscriptionName - see https://docs.microsoft.com/powershell/module/azurerm.profile/set-azurermcontext
Upload the VHD with Add-AzureRmVhd to an Azure Storage Account - see
https://docs.microsoft.com/powershell/module/azurerm.compute/add-azurermvhd
(Optional) Create a Managed Disk from the VHD with New-AzureRmDisk - see
https://docs.microsoft.com/powershell/module/azurerm.compute/new-azurermdisk
Set the OS disk of a new VM config to the VHD or Managed Disk with Set-AzureRmVMOSDisk - see
https://docs.microsoft.com/powershell/module/azurerm.compute/set-azurermvmosdisk
Create a new VM from the VM config with New-AzureRmVM - see
https://docs.microsoft.com/powershell/module/azurerm.compute/new-azurermvm
Add a data disk to a new VM with Add-AzureRmVMDataDisk - see
https://docs.microsoft.com/powershell/module/azurerm.compute/add-azurermvmdatadisk
Azure CLI 2.0
Log in to your subscription with az login
Select your subscription with az account set --subscription <subscription name or id>
Upload the VHD with az storage blob upload - see Using the Azure CLI with Azure Storage
(Optional) Create a Managed Disk from the VHD with az disk create - see
https://docs.microsoft.com/cli/azure/disk#az_disk_create
Create a new VM specifying the uploaded VHD or Managed Disk as OS disk with az vm create and
parameter --attach-os-disk
Add a data disk to a new VM with az vm disk attach and parameter --new
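The CLI steps above can be strung together as in this sketch. All names, paths, and sizes are placeholders, and the `run` wrapper only prints the commands so the flow can be reviewed before executing it against a real subscription.

```shell
# Dry-run sketch of the CLI upload flow (all names hypothetical).
# 'run' echoes the commands; change it to 'az "$@"' to execute for real.
run() { echo "az $*"; }

RG=myResourceGroup; SA=mystorageacct; VHD=sapos.vhd

run login
run account set --subscription "My Subscription"
# VHDs must land in page blobs, hence --type page
run storage blob upload --account-name "$SA" --container-name vhds \
  --name "$VHD" --file "/local/path/$VHD" --type page
# Optional: wrap the uploaded blob in a Managed Disk
run disk create --resource-group "$RG" --name sapos-disk \
  --source "https://$SA.blob.core.windows.net/vhds/$VHD"
# Boot a VM from the prepared OS disk
run vm create --resource-group "$RG" --name sapvm \
  --attach-os-disk sapos-disk --os-type linux
# Attach an additional, newly created data disk
run vm disk attach --resource-group "$RG" --vm-name sapvm \
  --name sapdata-disk --new --size-gb 512
```

The dry-run pattern is useful here because a failed upload of a large VHD is expensive to repeat; checking the command sequence first costs nothing.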
Template
Upload the VHD with PowerShell or Azure CLI
(Optional) Create a Managed Disk from the VHD with PowerShell, Azure CLI, or the Azure portal
Deploy the VM with a JSON template referencing the VHD as shown in this example JSON template or using
Managed Disks as shown in this example JSON template.
Deployment of a VM Image
To upload an existing VM or VHD from the on-premises network in order to use it as an Azure VM image, such
a VM or VHD needs to meet the requirements listed in chapter Preparation for deploying a VM with a
customer-specific image for SAP of this document.
PowerShell
Use sysprep on Windows or waagent -deprovision on Linux to generalize your VM - see Sysprep Technical
Reference for Windows or How to capture a Linux virtual machine to use as a Resource Manager template
for Linux
Log in to your subscription with Login-AzureRmAccount
Set the subscription of your context with Set-AzureRmContext and parameter SubscriptionId or
SubscriptionName - see https://docs.microsoft.com/powershell/module/azurerm.profile/set-azurermcontext
Upload the VHD with Add-AzureRmVhd to an Azure Storage Account - see
https://docs.microsoft.com/powershell/module/azurerm.compute/add-azurermvhd
(Optional) Create a Managed Disk Image from the VHD with New-AzureRmImage - see
https://docs.microsoft.com/powershell/module/azurerm.compute/new-azurermimage
Set the OS disk of a new VM config to the
VHD with Set-AzureRmVMOSDisk -SourceImageUri -CreateOption fromImage - see
https://docs.microsoft.com/powershell/module/azurerm.compute/set-azurermvmosdisk
Managed Disk Image with Set-AzureRmVMSourceImage - see
https://docs.microsoft.com/powershell/module/azurerm.compute/set-azurermvmsourceimage
Create a new VM from the VM config with New-AzureRmVM - see
https://docs.microsoft.com/powershell/module/azurerm.compute/new-azurermvm
Azure CLI 2.0
Use sysprep on Windows or waagent -deprovision on Linux to generalize your VM - see Sysprep Technical
Reference for Windows or How to capture a Linux virtual machine to use as a Resource Manager template
for Linux
Log in to your subscription with az login
Select your subscription with az account set --subscription <subscription name or id>
Upload the VHD with az storage blob upload - see Using the Azure CLI with Azure Storage
(Optional) Create a Managed Disk Image from the VHD with az image create - see
https://docs.microsoft.com/cli/azure/image#az_image_create
Create a new VM specifying the uploaded VHD or Managed Disk Image as OS disk with az vm create and
parameter --image
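The image-based CLI steps can be sketched the same way. All names are placeholders, and the `run` wrapper only prints the commands instead of executing them.

```shell
# Dry-run sketch of the CLI image flow (all names hypothetical).
# 'run' echoes the commands; change it to 'az "$@"' to execute for real.
run() { echo "az $*"; }

RG=myResourceGroup; SA=mystorageacct; VHD=sapos-generalized.vhd

# Upload the generalized VHD as a page blob
run storage blob upload --account-name "$SA" --container-name vhds \
  --name "$VHD" --file "/local/path/$VHD" --type page
# Optional: turn the generalized VHD into a Managed Disk Image
run image create --resource-group "$RG" --name sap-image \
  --os-type Linux --source "https://$SA.blob.core.windows.net/vhds/$VHD"
# Every new VM instance is then created from the image
run vm create --resource-group "$RG" --name sapvm1 --image sap-image
```

The key difference from the non-generalized flow is the image resource in the middle: one generalized source can stamp out many VMs, each of which then gets its own hostname and SID via the SAP rename procedure.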
Template
Use sysprep on Windows or waagent -deprovision on Linux to generalize your VM - see Sysprep Technical
Reference for Windows or How to capture a Linux virtual machine to use as a Resource Manager template
for Linux
Upload the VHD with Powershell or Azure CLI
(Optional) Create a Managed Disk Image from the VHD with Powershell, Azure CLI or the Azure portal
Deploy the VM with a JSON template referencing the image VHD as shown in this example JSON template
or using the Managed Disk Image as shown in this example JSON template.
Downloading VHDs or Managed Disks to on-premises
Azure Infrastructure as a Service is not a one-way street of only being able to upload VHDs and SAP systems.
You can move SAP systems from Azure back into the on-premises world as well.
During the download, the VHDs or Managed Disks can't be active. Even when downloading disks that are
mounted to VMs, the VM needs to be shut down and deallocated. If you only want to download the database
content, which is then used to set up a new system on-premises, and if it is acceptable that the system in
Azure can still be operational during the download and the setup of the new system, you can avoid a long
downtime by performing a compressed database backup to a disk and downloading just that disk instead of
also downloading the OS base VM.
PowerShell
Downloading a Managed Disk
You first need to get access to the underlying blob of the Managed Disk. Then you can copy the
underlying blob to a new storage account and download the blob from this storage account.

$access = Grant-AzureRmDiskAccess -ResourceGroupName <resource group> -DiskName <disk name> -Access Read -DurationInSecond 3600
$key = (Get-AzureRmStorageAccountKey -ResourceGroupName <resource group> -Name <storage account name>)[0].Value
$destContext = (New-AzureStorageContext -StorageAccountName <storage account name> -StorageAccountKey $key)
Start-AzureStorageBlobCopy -AbsoluteUri $access.AccessSAS -DestContainer <container name> -DestBlob <blob name> -DestContext $destContext
# Wait for the blob copy to finish
Get-AzureStorageBlobCopyState -Container <container name> -Blob <blob name> -Context $destContext
Save-AzureRmVhd -SourceUri <blob in new storage account> -LocalFilePath <local file path> -StorageKey $key
# Wait for the download to finish
Revoke-AzureRmDiskAccess -ResourceGroupName <resource group> -DiskName <disk name>

Downloading a VHD
Once the SAP system is stopped and the VM is shut down, you can use the PowerShell cmdlet Save-AzureRmVhd
on the on-premises target to download the VHD disks back to the on-premises world. In order
to do that, you need the URL of the VHD, which you can find in the storage section of the Azure portal
(navigate to the Storage Account and the storage container where the VHD was created), and you need to
know where the VHD should be copied to.
Then you can leverage the command by simply defining the parameter SourceUri as the URL of the VHD to
download and LocalFilePath as the physical location of the VHD (including its name). The command could
look like:

Save-AzureRmVhd -ResourceGroupName <resource group name of storage account> -SourceUri http://<storage account name>.blob.core.windows.net/<container name>/sapidedata.vhd -LocalFilePath E:\Azure_downloads\sapidesdata.vhd

For more details of the Save-AzureRmVhd cmdlet, please check here:
https://docs.microsoft.com/powershell/module/azurerm.compute/save-azurermvhd.
CLI 2.0
Downloading a Managed Disk
You first need to get access to the underlying blob of the Managed Disk. Then you can copy the
underlying blob to a new storage account and download the blob from this storage account.

az disk grant-access --ids "/subscriptions/<subscription id>/resourceGroups/<resource group>/providers/Microsoft.Compute/disks/<disk name>" --duration-in-seconds 3600
az storage blob download --sas-token "<sas token>" --account-name <account name> --container-name <container name> --name <blob name> --file <local file>
az disk revoke-access --ids "/subscriptions/<subscription id>/resourceGroups/<resource group>/providers/Microsoft.Compute/disks/<disk name>"

Downloading a VHD
Once the SAP system is stopped and the VM is shut down, you can use the Azure CLI command
az storage blob download on the on-premises target to download the VHD disks back
on-premises. In order to do that, you need the name and the container of the VHD, which you can find in the
storage section of the Azure portal (navigate to the storage account and the storage container
where the VHD was created), and you need to know where the VHD should be copied to.
Then call the command, defining the parameters --name and --container-name of the VHD
to download and --file as the physical target location of the VHD (including its name). The
command could look like:

az storage blob download --name <name of the VHD to download> --container-name <container of the VHD
to download> --account-name <storage account name of the VHD to download> --account-key <storage
account key> --file <destination of the VHD to download>

Transferring VMs and disks within Azure


Copying SAP systems within Azure
An SAP system, or even a dedicated DBMS server supporting an SAP application layer, likely consists of
several disks which contain either the OS with the binaries, or the data and log file(s) of the SAP database.
Neither the Azure functionality of copying disks nor the Azure functionality of saving disks to a local disk has a
synchronization mechanism that would snapshot multiple disks consistently at the same point in time. Therefore,
the state of the copied or saved disks would differ, even if those disks are mounted against the same VM. In the
concrete case of data and log file(s) spread over different disks, this means that the database would end up
inconsistent.
Conclusion: In order to copy or save disks which are part of an SAP system configuration, you need to
stop the SAP system and also shut down the deployed VM. Only then can you copy or
download the set of disks to either create a copy of the SAP system in Azure or on-premises.
Data disks can be stored as VHD files in an Azure Storage Account and can be directly attached to a virtual
machine or be used as an image. In this case, the VHD is copied to another location before being attached to the
virtual machine. The full name of the VHD file in Azure must be unique within Azure. As mentioned earlier,
the name is a three-part name that looks like:

http(s)://<storage account name>.blob.core.windows.net/<container name>/<vhd name>

Data disks can also be Managed Disks. In this case, the Managed Disk is used to create a new Managed Disk
before being attached to the virtual machine. The name of the Managed Disk must be unique within a resource
group.
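As a quick sketch, the three-part VHD name described above can be assembled from its components; the storage account, container, and VHD names below are hypothetical placeholders, not values from this guide:

```shell
# Hypothetical values -- replace with your own storage account, container, and VHD name.
account="saperpdemo"
container="vhds"
vhd="sapides-os.vhd"

# Assemble the three-part name; the full name must be unique within all of Azure.
url="https://${account}.blob.core.windows.net/${container}/${vhd}"
echo "$url"
```

The same pattern appears in every VHD URL used by the PowerShell and CLI examples in this chapter.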
PowerShell

You can use Azure PowerShell cmdlets to copy a VHD as shown in this article. To create a new Managed Disk,
use New-AzureRmDiskConfig and New-AzureRmDisk as shown in the following example.
$config = New-AzureRmDiskConfig -CreateOption Copy -SourceUri "/subscriptions/<subscription
id>/resourceGroups/<resource group>/providers/Microsoft.Compute/disks/<disk name>" -Location <location>
New-AzureRmDisk -ResourceGroupName <resource group name> -DiskName <disk name> -Disk $config

CLI 2.0

You can use Azure CLI to copy a VHD as shown in this article. To create a new Managed Disk, use az disk create
as shown in the following example.

az disk create --source "/subscriptions/<subscription id>/resourceGroups/<resource


group>/providers/Microsoft.Compute/disks/<disk name>" --name <disk name> --resource-group <resource group
name> --location <location>

Azure Storage tools

http://storageexplorer.com/
Professional editions of Azure Storage Explorers can be found here:
http://www.cerebrata.com/
http://clumsyleaf.com/products/cloudxplorer
The copy of a VHD itself within a storage account is a process which takes only a few seconds (similar to SAN
hardware creating snapshots with lazy copy and copy on write). After you have a copy of the VHD file you can
attach it to a virtual machine or use it as an image to attach copies of the VHD to virtual machines.
PowerShell

# attach a vhd to a vm
$vm = Get-AzureRmVM -ResourceGroupName <resource group name> -Name <vm name>
$vm = Add-AzureRmVMDataDisk -VM $vm -Name newdatadisk -VhdUri <path to vhd> -Caching <caching option> -
DiskSizeInGB $null -Lun <lun, for example 0> -CreateOption attach
$vm | Update-AzureRmVM

# attach a managed disk to a vm


$vm = Get-AzureRmVM -ResourceGroupName <resource group name> -Name <vm name>
$vm = Add-AzureRmVMDataDisk -VM $vm -Name newdatadisk -ManagedDiskId <managed disk id> -Caching <caching
option> -DiskSizeInGB $null -Lun <lun, for example 0> -CreateOption attach
$vm | Update-AzureRmVM

# attach a copy of the vhd to a vm


$vm = Get-AzureRmVM -ResourceGroupName <resource group name> -Name <vm name>
$vm = Add-AzureRmVMDataDisk -VM $vm -Name <disk name> -VhdUri <new path of vhd> -SourceImageUri <path to
image vhd> -Caching <caching option> -DiskSizeInGB $null -Lun <lun, for example 0> -CreateOption fromImage
$vm | Update-AzureRmVM

# attach a copy of the managed disk to a vm


$vm = Get-AzureRmVM -ResourceGroupName <resource group name> -Name <vm name>
$diskConfig = New-AzureRmDiskConfig -Location $vm.Location -CreateOption Copy -SourceUri <source managed
disk id>
$disk = New-AzureRmDisk -DiskName <disk name> -Disk $diskConfig -ResourceGroupName <resource group name>
$vm = Add-AzureRmVMDataDisk -VM $vm -Caching <caching option> -Lun <lun, for example 0> -CreateOption
attach -ManagedDiskId $disk.Id
$vm | Update-AzureRmVM

CLI 2.0
# attach a vhd to a vm
az vm unmanaged-disk attach --resource-group <resource group name> --vm-name <vm name> --vhd-uri <path to
vhd>

# attach a managed disk to a vm


az vm disk attach --resource-group <resource group name> --vm-name <vm name> --disk <managed disk id>

# attach a copy of the vhd to a vm


# this scenario is currently not possible with Azure CLI. A workaround is to manually copy the vhd to the
destination.

# attach a copy of a managed disk to a vm


az disk create --name <new disk name> --resource-group <resource group name> --location <location of target
virtual machine> --source <source managed disk id>
az vm disk attach --disk <new disk name or managed disk id> --resource-group <resource group name> --vm-
name <vm name> --caching <caching option> --lun <lun, for example 0>

Copying disks between Azure Storage Accounts


This task cannot be performed on the Azure portal. You can use Azure PowerShell cmdlets, Azure CLI, or a
third-party storage browser. The PowerShell cmdlets or CLI commands can create and manage blobs, including
the ability to asynchronously copy blobs across Storage Accounts and across regions within the Azure
subscription.
PowerShell

You can also copy VHDs between subscriptions. For more information read this article.
The basic flow of the PS cmdlet logic looks like this:
Create a storage account context for the source storage account with New-AzureStorageContext - see
https://msdn.microsoft.com/library/dn806380.aspx
Create a storage account context for the target storage account with New-AzureStorageContext - see
https://msdn.microsoft.com/library/dn806380.aspx
Start the copy with

Start-AzureStorageBlobCopy -SrcBlob <source blob name> -SrcContainer <source container name> -SrcContext
<variable containing context of source storage account> -DestBlob <target blob name> -DestContainer <target
container name> -DestContext <variable containing context of target storage account>

Check the status of the copy in a loop with

Get-AzureStorageBlobCopyState -Blob <target blob name> -Container <target container name> -Context
<variable containing context of target storage account>

Attach the new VHD to a virtual machine as described above.


For examples see this article.
CLI 2.0

Start the copy with

az storage blob copy start --source-blob <source blob name> --source-container <source container name> --
source-account-name <source storage account name> --source-account-key <source storage account key> --
destination-container <target container name> --destination-blob <target blob name> --account-name <target
storage account name> --account-key <target storage account key>

Check the status of the copy in a loop with


az storage blob show --name <target blob name> --container <target container name> --account-name <target
storage account name> --account-key <target storage account key>

Attach the new VHD to a virtual machine as described above.


For examples see this article.
Disk Handling
VM/disk structure for SAP deployments
Ideally the handling of the structure of a VM and the associated disks should be very simple. In on-premises
installations, customers developed many ways of structuring a server installation.
One base disk which contains the OS and all the binaries of the DBMS and/or SAP. Since March 2015, this
disk can be up to 1TB in size instead of earlier restrictions that limited it to 127GB.
One or multiple disks which contains the DBMS log file of the SAP database and the log file of the DBMS
temp storage area (if the DBMS supports this). If the database log IOPS requirements are high, you need to
stripe multiple disks in order to reach the IOPS volume required.
A number of disks containing one or two database files of the SAP database and the DBMS temp data files as
well (if the DBMS supports this).
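The striping consideration above boils down to simple arithmetic: divide the required IOPS by the per-disk quota and round up. The target of 5000 IOPS and the per-disk quota of 500 IOPS below are hypothetical illustration values, not a sizing recommendation:

```shell
# Hypothetical sizing inputs -- adjust to your workload and to the quota of the chosen disk type.
required_iops=5000      # IOPS the DBMS data or log volume must sustain
iops_per_disk=500       # IOPS quota of a single disk of the chosen type

# Round the quotient up: you cannot stripe across a partial disk.
disks=$(( (required_iops + iops_per_disk - 1) / iops_per_disk ))
echo "Stripe the volume across $disks disks"
```

The real quotas per disk type are documented in the articles referenced below for Linux and Windows.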

Windows
With many customers we saw configurations where, for example, SAP and DBMS binaries were not installed
on the c:\ drive where the OS was installed. There were various reasons for this, but the root cause usually
was that the drives were small and OS upgrades needed additional space 10-15 years ago. Neither condition
applies often these days anymore. Today the c:\ drive can be mapped to large volume disks or VMs. In order
to keep deployments simple in their structure, it is recommended to follow this deployment pattern for SAP
NetWeaver systems in Azure:
The Windows operating system pagefile should be on the D: drive (non-persistent disk)

Linux
Place the Linux swapfile under /mnt/resource on Linux as described in this article. The swap file can be
configured in the configuration file of the Linux Agent, /etc/waagent.conf. Add or change the following
settings:

ResourceDisk.EnableSwap=y
ResourceDisk.SwapSizeMB=30720

To activate the changes, you need to restart the Linux Agent with

sudo service waagent restart

Please read SAP Note 1597355 for more details on the recommended swap file size.

The number of disks used for the DBMS data files and the type of Azure Storage these disks are hosted on
should be determined by the IOPS requirements and the latency required. Exact quotas are described in this
article (Linux) and this article (Windows).
Experience of SAP deployments over the last two years taught us some lessons, which can be summarized as:
IOPS traffic to different data files is not always the same, since existing customer systems might have
differently sized data files representing their SAP database(s). As a result, it turned out to be better to use a
RAID configuration over multiple disks and place the data files on LUNs carved out of those. There were
situations, especially with Azure Standard Storage, where the IOPS rate against the DBMS transaction log hit
the quota of a single disk. In such scenarios, the use of Premium Storage is recommended, or alternatively
aggregating multiple Standard Storage disks with a software RAID.

Windows
Performance best practices for SQL Server in Azure Virtual Machines

Linux
Configure Software RAID on Linux
Configure LVM on a Linux VM in Azure
Azure Storage secrets and Linux I/O optimizations

Premium Storage is showing significant better performance, especially for critical transaction log writes. For
SAP scenarios that are expected to deliver production like performance, it is highly recommended to use
VM-Series that can leverage Azure Premium Storage.
Keep in mind that the disk which contains the OS and, as we recommend, the binaries of SAP and the database
(base VM) as well, is no longer limited to 127GB. It can now be up to 1TB in size. This should be enough
space to keep all the necessary files, including, for example, SAP batch job logs.
For more suggestions and more details, specifically for DBMS VMs, please consult the DBMS Deployment Guide.
Disk Handling
In most scenarios you need to create additional disks in order to deploy the SAP database into the VM. We
talked about the considerations on number of disks in chapter VM/disk structure for SAP deployments of this
document. The Azure portal allows you to attach and detach disks once a base VM is deployed. Disks can be
attached or detached while the VM is up and running, as well as when it is stopped. When attaching a disk, the
Azure portal offers to attach an empty disk or an existing disk which at this point in time is not attached to
another VM.
Note: Disks can only be attached to one VM at any given time.

During the deployment of a new virtual machine, you can decide whether you want to use Managed Disks or
place your disks on Azure Storage Accounts. If you want to use Premium Storage, we recommend using
Managed Disks.
Next, you need to decide whether you want to create a new and empty disk or whether you want to select an
existing disk that was uploaded earlier and should be attached to the VM now.
IMPORTANT: You DO NOT want to use Host Caching with Azure Standard Storage. Leave the Host
Cache preference at the default of NONE. With Azure Premium Storage, you should enable Read Caching if the
I/O characteristic is mostly read, like typical I/O traffic against database data files. For database transaction
log files, no caching is recommended.

Windows
How to attach a data disk in the Azure portal
If disks are attached, you need to log in to the VM to open the Windows Disk Manager. If automount is not
enabled as recommended in chapter Setting automount for attached disks, the newly attached volume
needs to be taken online and initialized.

Linux
If disks are attached, you need to log in to the VM and initialize the disks as described in this article

If the new disk is an empty disk, you need to format the disk as well. For formatting, especially for DBMS data
and log files the same recommendations as for bare-metal deployments of the DBMS apply.
As already mentioned in chapter The Microsoft Azure Virtual Machine Concept, an Azure Storage account does
not provide infinite resources in terms of I/O volume, IOPS and data volume. Usually DBMS VMs are most
affected by this. If you have a few high I/O volume VMs to deploy, it might be best to use a separate Storage
Account for each VM in order to stay within the limits of a single Azure Storage Account. Otherwise, you
need to see how you can balance these VMs between different Storage accounts without hitting the limit of
each single Storage Account. More details are discussed in the DBMS Deployment Guide. You should also keep
these limitations in mind for pure SAP application server VMs or other VMs which eventually might require
additional VHDs. These restrictions do not apply if you use Managed Disks. If you plan to use Premium Storage,
we recommend using Managed Disks.
Another topic which is relevant for Storage Accounts is whether the VHDs in a Storage Account are getting Geo-
replicated. Geo-replication is enabled or disabled on the Storage Account level and not on the VM level. If geo-
replication is enabled, the VHDs within the Storage Account would be replicated into another Azure data center
within the same region. Before deciding on this, you should think about the following restriction:
Azure Geo-replication works locally on each VHD in a VM and does not replicate the IOs in chronological order
across multiple VHDs in a VM. Therefore, the VHD that represents the base VM as well as any additional VHDs
attached to the VM are replicated independent of each other. This means there is no synchronization between
the changes in the different VHDs. The fact that the IOs are replicated independently of the order in which they
are written means that geo-replication is not of value for database servers that have their databases distributed
over multiple VHDs. In addition to the DBMS, there also might be other applications where processes write or
manipulate data in different VHDs and where it is important to keep the order of changes. If that is a
requirement, geo-replication in Azure should not be enabled. Depending on whether you need or want geo-
replication for one set of VMs but not for another, you can categorize VMs and their related VHDs into
different Storage Accounts that have geo-replication enabled or disabled.
Setting automount for attached disks

Windows
For VMs which are created from your own images or disks, it is necessary to check and possibly set the
automount parameter. Setting this parameter allows the VM to automatically mount the attached drives
again after a restart or redeployment in Azure. The parameter is set for the images provided by
Microsoft in the Azure Marketplace.
In order to set the automount, please check the documentation of the command-line executable diskpart.exe
here:
DiskPart Command-Line Options
Automount
The Windows command-line window should be opened as administrator.
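A minimal diskpart session to check and enable automount could look like the following sketch; run it in an elevated command prompt (the exact console output may vary by Windows version):

```
diskpart
DISKPART> automount
DISKPART> automount enable
DISKPART> exit
```

The bare automount command displays the current setting before you change it.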

Linux
You need to initialize a newly attached empty disk as described in this article. You also need to add new
disks to /etc/fstab.
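An /etc/fstab entry for a newly attached and formatted data disk could look like this sketch; the UUID, mount point, and file system below are placeholders (use blkid to find the real UUID, and prefer UUIDs over /dev/sdX names, since device ordering can change across reboots):

```
# Example /etc/fstab entry -- placeholder values only
UUID=33333333-3b3b-3c3c-3d3d-3e3e3e3e3e3e  /sapdata  xfs  defaults,nofail  0  2
```

The nofail option keeps the VM bootable even if the disk is temporarily detached.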

Final Deployment
For the final deployment and exact steps, especially with regards to the deployment of SAP Extended
Monitoring, please refer to the Deployment Guide.

Accessing SAP systems running within Azure VMs


For Cloud-Only scenarios, you might want to connect to those SAP systems across the public internet using SAP
GUI. For these cases, the following procedures need to be applied.
Later in the document we will discuss the other major scenario, connecting to SAP systems in cross-premises
deployments which have a site-to-site connection (VPN tunnel) or Azure ExpressRoute connection between the
on-premises systems and Azure systems.
Remote Access to SAP systems
With Azure Resource Manager there are no default endpoints anymore like in the former classic model. All
ports of an Azure ARM VM are open as long as:
1. No Network Security Group is defined for the subnet or the network interface. Network traffic to Azure VMs
can be secured via so-called "Network Security Groups". For more information see What is a Network
Security Group (NSG)?
2. No Azure Load Balancer is defined for the network interface
See the architecture difference between classic model and ARM as described in this article.
Configuration of the SAP System and SAP GUI connectivity for Cloud-Only scenario
Please see this article which describes details to this topic:
http://blogs.msdn.com/b/saponsqlserver/archive/2014/06/24/sap-gui-connection-closed-when-connecting-to-
sap-system-in-azure.aspx
Changing Firewall Settings within VM
It might be necessary to configure the firewall on your virtual machines to allow inbound traffic to your SAP
system.

Windows
By default, the Windows Firewall within an Azure deployed VM is turned on. You now need to allow the SAP
Port to be opened, otherwise the SAP GUI will not be able to connect. To do this:
Open Control Panel\System and Security\Windows Firewall and go to Advanced Settings.
Now right-click on Inbound Rules and choose New Rule.
In the following wizard, choose to create a new Port rule.
In the next step of the wizard, leave the setting at TCP and type in the port number you want to open.
Since our SAP instance ID is 00, we took 3200. If your instance has a different instance number, the port
derived from that instance number should be opened.
In the next part of the wizard, you need to leave the item Allow Connection checked.
In the next step of the wizard, you need to define whether the rule applies to the Domain, Private, and
Public networks. Adjust it to your needs if necessary. However, to connect with SAP GUI from the outside
through the public network, you need to have the rule applied to the public network.
In the last step of the wizard, name the rule and save it by pressing Finish.
The rule becomes effective immediately.

Linux
The Linux images in the Azure Marketplace do not enable the iptables firewall by default, and the connection
to your SAP system should work. If you enabled iptables or another firewall, please refer to the
documentation of iptables or the firewall used to allow inbound TCP traffic to port 32xx (where xx is the
system number of your SAP system).
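The relevant port numbers follow directly from the SAP instance number, as a small sketch shows; the instance number 00 is a hypothetical example:

```shell
# Hypothetical SAP instance number -- adjust to your system.
instance_nr="00"

# SAP GUI talks to the dispatcher on 32<nr>; the message server listens on 36<nr>.
dispatcher_port="32${instance_nr}"
msgserver_port="36${instance_nr}"
echo "Allow inbound tcp/${dispatcher_port} (dispatcher) and tcp/${msgserver_port} (message server)"
```

Open only the ports your connection scenario actually needs; the message server port is discussed in the security recommendations below.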

Security recommendations
The SAP GUI does not connect immediately to any of the SAP instances (port 32xx) which are running, but first
connects via the port opened to the SAP message server process (port 36xx). In the past the very same port was
used by the message server for the internal communication to the application instances. To prevent on-
premises application servers from inadvertently communicating with a message server in Azure the internal
communication ports can be changed. It is highly recommended to change the internal communication
between the SAP message server and its application instances to a different port number on systems that have
been cloned from on-premises systems, such as a clone of a development system for project testing. This can
be done with the default profile parameter:

rdisp/msserv_internal

as documented in Security Settings for the SAP Message Server
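In the default profile, the parameter could be set as in the following sketch; the port value is only an example of an otherwise unused port, not a recommendation from SAP:

```
# DEFAULT.PFL -- example only; pick a free port for internal message server communication
rdisp/msserv_internal = 3900
```

With a port that differs from the on-premises value, cloned application servers in Azure can no longer inadvertently register against the on-premises message server.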

Concepts of Cloud-Only deployment of SAP instances


Single VM with SAP NetWeaver demo/training scenario

In this scenario (see chapter Cloud-Only of this document) we are implementing a typical training/demo system
scenario where the complete training/demo scenario is contained in a single VM. We assume that the
deployment is done through VM image templates. We also assume that multiple of these demo/training VMs
need to be deployed with the VMs having the same name.
The assumption is that you created a VM Image as described in some sections of chapter Preparing VMs with
SAP for Azure in this document.
The sequence of events to implement the scenario looks like this:
PowerShell

Create a new resource group for every training/demo landscape


$rgName = "SAPERPDemo1"
New-AzureRmResourceGroup -Name $rgName -Location "North Europe"

Create a new storage account if you don't want to use Managed Disks

$suffix = Get-Random -Minimum 100000 -Maximum 999999


$account = New-AzureRmStorageAccount -ResourceGroupName $rgName -Name "saperpdemo$suffix" -SkuName
Standard_LRS -Kind "Storage" -Location "North Europe"

Create a new virtual network for every training/demo landscape to enable the usage of the same hostname
and IP addresses. The virtual network is protected by a Network Security Group that only allows traffic to
port 3389 to enable Remote Desktop access and port 22 for SSH.

# Create a new Virtual Network


$rdpRule = New-AzureRmNetworkSecurityRuleConfig -Name SAPERPDemoNSGRDP -Protocol * -SourcePortRange * -
DestinationPortRange 3389 -Access Allow -Direction Inbound -SourceAddressPrefix * -DestinationAddressPrefix
* -Priority 100
$sshRule = New-AzureRmNetworkSecurityRuleConfig -Name SAPERPDemoNSGSSH -Protocol * -SourcePortRange * -
DestinationPortRange 22 -Access Allow -Direction Inbound -SourceAddressPrefix * -DestinationAddressPrefix *
-Priority 101
$nsg = New-AzureRmNetworkSecurityGroup -Name SAPERPDemoNSG -ResourceGroupName $rgName -Location "North
Europe" -SecurityRules $rdpRule,$sshRule

$subnetConfig = New-AzureRmVirtualNetworkSubnetConfig -Name Subnet1 -AddressPrefix 10.0.1.0/24 -


NetworkSecurityGroup $nsg
$vnet = New-AzureRmVirtualNetwork -Name SAPERPDemoVNet -ResourceGroupName $rgName -Location "North Europe"
-AddressPrefix 10.0.1.0/24 -Subnet $subnetConfig

Create a new public IP address that can be used to access the virtual machine from the internet

# Create a public IP address with a DNS name


$pip = New-AzureRmPublicIpAddress -Name SAPERPDemoPIP -ResourceGroupName $rgName -Location "North Europe" -
DomainNameLabel $rgName.ToLower() -AllocationMethod Dynamic

Create a new network interface for the virtual machine

# Create a new Network Interface


$nic = New-AzureRmNetworkInterface -Name SAPERPDemoNIC -ResourceGroupName $rgName -Location "North Europe"
-Subnet $vnet.Subnets[0] -PublicIpAddress $pip

Create a virtual machine. For the Cloud-Only scenario every VM will have the same name. The SAP SID of
the SAP NetWeaver instances in those VMs will be the same as well. Within the Azure Resource Group, the
name of the VM needs to be unique, but in different Azure Resource Groups you can run VMs with the same
name. The default 'Administrator' account of Windows or 'root' for Linux are not valid. Therefore, a new
administrator user name needs to be defined together with a password. The size of the VM also needs to be
defined.
#####
# Create a new virtual machine with an official image from the Azure Marketplace
#####
$cred=Get-Credential -Message "Type the name and password of the local administrator account."
$vmconfig = New-AzureRmVMConfig -VMName SAPERPDemo -VMSize Standard_D11

# select image
$vmconfig = Set-AzureRmVMSourceImage -VM $vmconfig -PublisherName "MicrosoftWindowsServer" -Offer
"WindowsServer" -Skus "2012-R2-Datacenter" -Version "latest"
$vmconfig = Set-AzureRmVMOperatingSystem -VM $vmconfig -Windows -ComputerName "SAPERPDemo" -Credential
$cred -ProvisionVMAgent -EnableAutoUpdate
# $vmconfig = Set-AzureRmVMSourceImage -VM $vmconfig -PublisherName "SUSE" -Offer "SLES-SAP" -Skus "12-SP1"
-Version "latest"
# $vmconfig = Set-AzureRmVMSourceImage -VM $vmconfig -PublisherName "RedHat" -Offer "RHEL" -Skus "7.2" -
Version "latest"
# $vmconfig = Set-AzureRmVMSourceImage -VM $vmconfig -PublisherName "Oracle" -Offer "Oracle-Linux" -Skus
"7.2" -Version "latest"
# $vmconfig = Set-AzureRmVMOperatingSystem -VM $vmconfig -Linux -ComputerName "SAPERPDemo" -Credential
$cred

$vmconfig = Add-AzureRmVMNetworkInterface -VM $vmconfig -Id $nic.Id

$vmconfig = Set-AzureRmVMBootDiagnostics -Disable -VM $vmconfig


$vm = New-AzureRmVM -ResourceGroupName $rgName -Location "North Europe" -VM $vmconfig

#####
# Create a new virtual machine with a VHD that contains the private image that you want to use
#####
$cred=Get-Credential -Message "Type the name and password of the local administrator account."
$vmconfig = New-AzureRmVMConfig -VMName SAPERPDemo -VMSize Standard_D11

$vmconfig = Add-AzureRmVMNetworkInterface -VM $vmconfig -Id $nic.Id

$diskName="osfromimage"
$osDiskUri=$account.PrimaryEndpoints.Blob.ToString() + "vhds/" + $diskName + ".vhd"

$vmconfig = Set-AzureRmVMOSDisk -VM $vmconfig -Name $diskName -VhdUri $osDiskUri -CreateOption fromImage -
SourceImageUri <path to VHD that contains the OS image> -Windows
$vmconfig = Set-AzureRmVMOperatingSystem -VM $vmconfig -Windows -ComputerName "SAPERPDemo" -Credential
$cred
#$vmconfig = Set-AzureRmVMOSDisk -VM $vmconfig -Name $diskName -VhdUri $osDiskUri -CreateOption fromImage -
SourceImageUri <path to VHD that contains the OS image> -Linux
#$vmconfig = Set-AzureRmVMOperatingSystem -VM $vmconfig -Linux -ComputerName "SAPERPDemo" -Credential $cred

$vmconfig = Set-AzureRmVMBootDiagnostics -Disable -VM $vmconfig


$vm = New-AzureRmVM -ResourceGroupName $rgName -Location "North Europe" -VM $vmconfig

#####
# Create a new virtual machine with a Managed Disk Image
#####
$cred=Get-Credential -Message "Type the name and password of the local administrator account."
$vmconfig = New-AzureRmVMConfig -VMName SAPERPDemo -VMSize Standard_D11

$vmconfig = Add-AzureRmVMNetworkInterface -VM $vmconfig -Id $nic.Id

$vmconfig = Set-AzureRmVMSourceImage -VM $vmconfig -Id <Id of Managed Disk Image>


$vmconfig = Set-AzureRmVMOperatingSystem -VM $vmconfig -Windows -ComputerName "SAPERPDemo" -Credential
$cred
#$vmconfig = Set-AzureRmVMOperatingSystem -VM $vmconfig -Linux -ComputerName "SAPERPDemo" -Credential $cred

$vmconfig = Set-AzureRmVMBootDiagnostics -Disable -VM $vmconfig


$vm = New-AzureRmVM -ResourceGroupName $rgName -Location "North Europe" -VM $vmconfig
Optionally add additional disks and restore necessary content. Be aware that all blob names (URLs to the
blobs) must be unique within Azure.

# Optional: Attach additional VHD data disks


$vm = Get-AzureRmVM -ResourceGroupName $rgName -Name SAPERPDemo
$dataDiskUri = $account.PrimaryEndpoints.Blob.ToString() + "vhds/datadisk.vhd"
Add-AzureRmVMDataDisk -VM $vm -Name datadisk -VhdUri $dataDiskUri -DiskSizeInGB 1023 -CreateOption empty |
Update-AzureRmVM

# Optional: Attach additional Managed Disks


$vm = Get-AzureRmVM -ResourceGroupName $rgName -Name SAPERPDemo
Add-AzureRmVMDataDisk -VM $vm -Name datadisk -DiskSizeInGB 1023 -CreateOption empty -Lun 0 | Update-
AzureRmVM

CLI

The following example code can be used on Linux. For Windows, please either use PowerShell as described
above or adapt the example to use %rgName% instead of $rgName and set the environment variable using the
Windows command set.
Create a new resource group for every training/demo landscape

rgName=SAPERPDemo1
rgNameLower=saperpdemo1
az group create --name $rgName --location "North Europe"

Create a new storage account

az storage account create --resource-group $rgName --location "North Europe" --kind Storage --sku
Standard_LRS --name $rgNameLower

Create a new virtual network for every training/demo landscape to enable the usage of the same hostname
and IP addresses. The virtual network is protected by a Network Security Group that only allows traffic to
port 3389 to enable Remote Desktop access and port 22 for SSH.

az network nsg create --resource-group $rgName --location "North Europe" --name SAPERPDemoNSG
az network nsg rule create --resource-group $rgName --nsg-name SAPERPDemoNSG --name SAPERPDemoNSGRDP --
protocol \* --source-address-prefix \* --source-port-range \* --destination-address-prefix \* --
destination-port-range 3389 --access Allow --priority 100 --direction Inbound
az network nsg rule create --resource-group $rgName --nsg-name SAPERPDemoNSG --name SAPERPDemoNSGSSH --
protocol \* --source-address-prefix \* --source-port-range \* --destination-address-prefix \* --
destination-port-range 22 --access Allow --priority 101 --direction Inbound

az network vnet create --resource-group $rgName --name SAPERPDemoVNet --location "North Europe" --address-prefixes 10.0.1.0/24
az network vnet subnet create --resource-group $rgName --vnet-name SAPERPDemoVNet --name Subnet1 --address-prefix 10.0.1.0/24 --network-security-group SAPERPDemoNSG

Create a new public IP address that can be used to access the virtual machine from the internet

az network public-ip create --resource-group $rgName --name SAPERPDemoPIP --location "North Europe" --dns-name $rgNameLower --allocation-method Dynamic

Create a new network interface for the virtual machine


az network nic create --resource-group $rgName --location "North Europe" --name SAPERPDemoNIC --public-ip-address SAPERPDemoPIP --subnet Subnet1 --vnet-name SAPERPDemoVNet

Create a virtual machine. For the Cloud-Only scenario every VM will have the same name. The SAP SID of
the SAP NetWeaver instances in those VMs will be the same as well. Within the Azure Resource Group, the
name of the VM needs to be unique, but in different Azure Resource Groups you can run VMs with the same
name. The default 'Administrator' account of Windows or 'root' for Linux are not valid. Therefore, a new
administrator user name needs to be defined together with a password. The size of the VM also needs to be
defined.

#####
# Create virtual machines using storage accounts
#####
az vm create --resource-group $rgName --location "North Europe" --name SAPERPDemo --nics SAPERPDemoNIC --image MicrosoftWindowsServer:WindowsServer:2012-R2-Datacenter:latest --admin-username <username> --admin-password <password> --size Standard_D11 --use-unmanaged-disk --storage-account $rgNameLower --storage-container-name vhds --os-disk-name os
#az vm create --resource-group $rgName --location "North Europe" --name SAPERPDemo --nics SAPERPDemoNIC --image SUSE:SLES-SAP:12-SP1:latest --admin-username <username> --admin-password <password> --size Standard_D11 --use-unmanaged-disk --storage-account $rgNameLower --storage-container-name vhds --os-disk-name os --authentication-type password
#az vm create --resource-group $rgName --location "North Europe" --name SAPERPDemo --nics SAPERPDemoNIC --image RedHat:RHEL:7.2:latest --admin-username <username> --admin-password <password> --size Standard_D11 --use-unmanaged-disk --storage-account $rgNameLower --storage-container-name vhds --os-disk-name os --authentication-type password
#az vm create --resource-group $rgName --location "North Europe" --name SAPERPDemo --nics SAPERPDemoNIC --image "Oracle:Oracle-Linux:7.2:latest" --admin-username <username> --admin-password <password> --size Standard_D11 --use-unmanaged-disk --storage-account $rgNameLower --storage-container-name vhds --os-disk-name os --authentication-type password

#####
# Create virtual machines using Managed Disks
#####
az vm create --resource-group $rgName --location "North Europe" --name SAPERPDemo --nics SAPERPDemoNIC --image MicrosoftWindowsServer:WindowsServer:2012-R2-Datacenter:latest --admin-username <username> --admin-password <password> --size Standard_DS11_v2 --os-disk-name os
#az vm create --resource-group $rgName --location "North Europe" --name SAPERPDemo --nics SAPERPDemoNIC --image SUSE:SLES-SAP:12-SP1:latest --admin-username <username> --admin-password <password> --size Standard_DS11_v2 --os-disk-name os --authentication-type password
#az vm create --resource-group $rgName --location "North Europe" --name SAPERPDemo --nics SAPERPDemoNIC --image RedHat:RHEL:7.2:latest --admin-username <username> --admin-password <password> --size Standard_DS11_v2 --os-disk-name os --authentication-type password
#az vm create --resource-group $rgName --location "North Europe" --name SAPERPDemo --nics SAPERPDemoNIC --image "Oracle:Oracle-Linux:7.2:latest" --admin-username <username> --admin-password <password> --size Standard_DS11_v2 --os-disk-name os --authentication-type password
#####
# Create a new virtual machine with a VHD that contains the private image that you want to use
#####
az vm create --resource-group $rgName --location "North Europe" --name SAPERPDemo --nics SAPERPDemoNIC --os-type Windows --admin-username <username> --admin-password <password> --size Standard_D11 --use-unmanaged-disk --storage-account $rgNameLower --storage-container-name vhds --os-disk-name os --image <path to image vhd>
#az vm create --resource-group $rgName --location "North Europe" --name SAPERPDemo --nics SAPERPDemoNIC --os-type Linux --admin-username <username> --admin-password <password> --size Standard_D11 --use-unmanaged-disk --storage-account $rgNameLower --storage-container-name vhds --os-disk-name os --image <path to image vhd> --authentication-type password

#####
# Create a new virtual machine with a Managed Disk Image
#####
az vm create --resource-group $rgName --location "North Europe" --name SAPERPDemo --nics SAPERPDemoNIC --admin-username <username> --admin-password <password> --size Standard_DS11_v2 --os-disk-name os --image <managed disk image id>
#az vm create --resource-group $rgName --location "North Europe" --name SAPERPDemo --nics SAPERPDemoNIC --admin-username <username> --admin-password <password> --size Standard_DS11_v2 --os-disk-name os --image <managed disk image id> --authentication-type password

Optionally add additional disks and restore necessary content. Be aware that all blob names (URLs to the
blobs) must be unique within Azure.

# Optional: Attach additional VHD data disks


az vm unmanaged-disk attach --resource-group $rgName --vm-name SAPERPDemo --size-gb 1023 --vhd-uri https://$rgNameLower.blob.core.windows.net/vhds/data.vhd --new

# Optional: Attach additional Managed Disks


az vm disk attach --resource-group $rgName --vm-name SAPERPDemo --size-gb 1023 --disk datadisk --new

Template

You can use the sample templates in the azure-quickstart-templates repository on GitHub:
Simple Linux VM
Simple Windows VM
VM from image
Implement a set of VMs which need to communicate within Azure
This Cloud-Only scenario is a typical scenario for training and demo purposes where the software representing
the demo/training scenario is spread over multiple VMs. The different components installed in the different
VMs need to communicate with each other. Again, in this scenario no on-premises network communication or
cross-premises scenario is needed.
This scenario is an extension of the installation described in chapter Single VM with SAP NetWeaver
demo/training scenario of this document. In this case more virtual machines will be added to an existing
resource group. In the following example the training landscape consists of an SAP ASCS/SCS VM, a VM
running a DBMS and an SAP Application Server instance VM.
Before you build this scenario you need to think about basic settings as already exercised in the scenario before.
Resource Group and Virtual Machine naming
All resource group names must be unique. Develop your own naming scheme for your resources, such as <rg-name>-suffix.

The virtual machine name has to be unique within the resource group.
Set up Network for communication between the different VMs
To prevent naming collisions with clones of the same training/demo landscapes, you need to create an Azure
Virtual Network for every landscape. DNS name resolution will be provided by Azure or you can configure your
own DNS server outside Azure (not to be further discussed here). In this scenario we do not configure our own
DNS. For all virtual machines inside one Azure Virtual Network, communication via hostnames will be enabled.
The reasons to separate training or demo landscapes by virtual networks and not only resource groups could
be:
The SAP landscape as set up needs its own AD/OpenLDAP and a Domain Server needs to be part of each of
the landscapes.
The SAP landscape as set up has components that need to work with fixed IP addresses.
More details about Azure Virtual Networks and how to define them can be found in this article.
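As a minimal sketch of the per-landscape separation described above, the names of a cloned landscape's resources can be derived from a single index. The naming convention follows the SAPERPDemo example from the earlier chapter and is an assumption; the script only prints the az CLI calls it would issue:

```shell
# Sketch: derive per-landscape resource group and storage account names
# from a landscape index so that clones of the same training landscape
# never collide. The VNet name can stay identical because it is scoped
# to the resource group.
landscape=2
rgName="SAPERPDemo$landscape"
# Storage account names must be lowercase
rgNameLower=$(printf '%s' "$rgName" | tr '[:upper:]' '[:lower:]')
echo "az group create --name $rgName --location \"North Europe\""
echo "az network vnet create --resource-group $rgName --name SAPERPDemoVNet --address-prefixes 10.0.1.0/24"
```

Running the printed commands per landscape index keeps host names and IP addresses identical across clones while the surrounding Azure resources stay isolated.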

Deploying SAP VMs with Corporate Network Connectivity (Cross-Premises)
You run an SAP landscape and want to divide the deployment between bare-metal for high-end DBMS servers,
on-premises virtualized environments for application layers and smaller 2-Tier configured SAP systems and
Azure IaaS. The base assumption is that SAP systems within one SAP landscape need to communicate with each
other and with many other software components deployed in the company, independent of their deployment
form. There also should be no differences introduced by the deployment form for the end user connecting with
SAP GUI or other interfaces. These conditions can only be met when we have the on-premises Active
Directory/OpenLDAP and DNS services extended to the Azure systems through site-to-site/multi-site
connectivity or private connections like Azure ExpressRoute.
In order to get more background on the implementation details of SAP on Azure, we encourage you to read
chapter Concepts of Cloud-Only deployment of SAP instances of this document which explains some of the
basic constructs of Azure and how these should be used with SAP applications in Azure.
Scenario of an SAP landscape
The cross-premises scenario can be roughly described like in the graphics below:
The scenario shown above describes a scenario where the on-premises AD/OpenLDAP and DNS are extended
to Azure. On the on-premises side, a certain IP address range is reserved per Azure subscription. The IP address
range will be assigned to an Azure Virtual Network on the Azure side.
Security considerations
The minimum requirement is the use of secure communication protocols such as SSL/TLS for browser access or
VPN-based connections for system access to the Azure services. The assumption is that companies handle the
VPN connection between their corporate network and Azure very differently. Some companies might open all ports indiscriminately, while others want to be precise about exactly which ports they open. The table below lists typical SAP communication ports. Basically, it is sufficient to open the SAP gateway port.

SERVICE           PORT NAME            EXAMPLE <nn> = 01   DEFAULT RANGE (MIN-MAX)   COMMENT

Dispatcher        sapdp<nn>, see *     3201                3200 - 3299               SAP Dispatcher, used by SAP GUI for Windows and Java

Message server    sapms<sid>, see **   3600                free sapms<anySID>        sid = SAP-System-ID

Gateway           sapgw<nn>, see *     3301                free                      SAP gateway, used for CPIC and RFC communication

SAP router        sapdp99              3299                free                      Only CI (central instance). Service names can be reassigned in /etc/services to an arbitrary value after installation.

*) nn = SAP instance number

**) sid = SAP-System-ID
More detailed information on ports required for different SAP products or services by SAP products can be
found here http://scn.sap.com/docs/DOC-17124. With this document you should be able to open dedicated
ports in the VPN device necessary for specific SAP products and scenarios.
Other security measures when deploying VMs in such a scenario could be to create a Network Security Group
to define access rules.
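Translating the port table above into Network Security Group rules can be sketched as follows. The script only prints the az CLI calls it would issue, the ports are computed from the SAP instance number as in the table, and the rule names and priorities are illustrative assumptions:

```shell
# Sketch: compute the SAP ports for instance number 1 (sapdp01, sapgw01)
# and print the az CLI calls that would open them in the SAPERPDemoNSG
# created earlier. Rule names and priorities are assumptions.
nn=1                       # SAP instance number
dp_port=$((3200 + nn))     # dispatcher port sapdp<nn>
gw_port=$((3300 + nn))     # gateway port sapgw<nn>
ms_port=3600               # message server port sapms<sid>

prio=200
for port in "$dp_port" "$gw_port" "$ms_port"; do
  echo "az network nsg rule create --resource-group \$rgName --nsg-name SAPERPDemoNSG --name SAPPort$port --protocol Tcp --destination-port-range $port --access Allow --priority $prio --direction Inbound"
  prio=$((prio + 1))
done
```

Restricting the rules further to the on-premises source address range instead of any source would follow the same pattern with --source-address-prefix.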
Dealing with different Virtual Machine Series
In the course of the last 12 months, Microsoft added many more VM types that differ in the number of vCPUs, in memory, or, more importantly, in the host hardware they run on. Not all of those VMs are supported with SAP (see the supported VM types in SAP Note 1928533). Some of those VMs run on different host hardware generations. These host hardware generations are deployed in the granularity of an Azure Scale-Unit. This means cases may arise where the different VM sizes you chose can't run on the same Scale-Unit. An Availability Set is limited in its ability to span Scale-Units based on different hardware. For example, if you want to run the DBMS on A5-A11 VMs and the SAP application layer on G-Series VMs, you would be forced to deploy a single SAP system or different SAP systems within different Availability Sets.
Printing on a local network printer from SAP instance in Azure
Printing over TCP/IP in Cross-Premises scenario

Setting up your on-premises TCP/IP based network printers in an Azure VM is overall the same as in your
corporate network, assuming you do have a VPN Site-To-Site tunnel or ExpressRoute connection established.

Windows
To do this:
Some network printers come with a configuration wizard which makes it easy to set up the printer in an Azure VM. If no wizard software was distributed with the printer, the manual way to set up the printer is to create a new TCP/IP printer port:
Open Control Panel -> Devices and Printers -> Add a printer
Choose Add a printer using a TCP/IP address or hostname
Type in the IP address of the printer
Printer port: standard 9100
If necessary, install the appropriate printer driver manually.

Linux
As for Windows, just follow the standard procedure to install a network printer: follow the public Linux guides for SUSE or for Red Hat and Oracle Linux on how to add a printer.

Host-based printer over SMB (shared printer) in Cross-Premises scenario

Host-based printers are not network-compatible by design. But a host-based printer can be shared among
computers on a network as long as the printer is connected to a powered-on computer. Connect your corporate network to Azure via Site-to-Site VPN or ExpressRoute and share your local printer. The SMB protocol uses NetBIOS
instead of DNS as name service. The NetBIOS host name can be different from the DNS host name. The
standard case is that the NetBIOS host name and the DNS host name are identical. The DNS domain does not
make sense in the NetBIOS name space. Accordingly, the fully qualified DNS host name consisting of the DNS
host name and DNS domain must not be used in the NetBIOS name space.
The printer share is identified by a unique name in the network:
Host name of the SMB host (always needed).
Name of the share (always needed).
Name of the domain if printer share is not in the same domain as SAP system.
Additionally, a user name and a password may be required to access the printer share.
How to:

Windows
Share your local printer. In the Azure VM, open the Windows Explorer and type in the share name of the
printer. A printer installation wizard will guide you through the installation process.

Linux
Here are some examples of documentation about configuring network printers in Linux or including a
chapter regarding printing in Linux. It will work the same way in an Azure Linux VM as long as the VM is
part of a VPN:
SLES https://en.opensuse.org/SDB:Printing_via_SMB_(Samba)_Share_or_Windows_Share
RHEL or Oracle Linux https://access.redhat.com/documentation/en-
US/Red_Hat_Enterprise_Linux/7/html/System_Administrators_Guide/sec-Printer_Configuration.html#s1-
printing-smb-printer

USB Printer (printer forwarding)

In Azure, the Remote Desktop Services capability of giving users access to their local printer devices in a remote session is not available.

Windows
More details on printing with Windows can be found here:
http://technet.microsoft.com/library/jj590748.aspx.

Integration of SAP Azure Systems into Correction and Transport System (TMS) in Cross-Premises
The SAP Change and Transport System (TMS) needs to be configured to export and import transport requests
across systems in the landscape. We assume that the development instances of an SAP system (DEV) are
located in Azure whereas the quality assurance (QA) and productive systems (PRD) are on-premises.
Furthermore, we assume that there is a central transport directory.
Configuring the Transport Domain

Configure your Transport Domain on the system you designated as the Transport Domain Controller as
described in Configuring the Transport Domain Controller. A system user TMSADM will be created and the
required RFC destination will be generated. You may check these RFC connections using the transaction SM59.
Hostname resolution must be enabled across your transport domain.
How to:
In our scenario we decided the on-premises QAS system will be the CTS domain controller. Call transaction
STMS. The TMS dialog box appears. A Configure Transport Domain dialog box is displayed. (This dialog box
only appears if you have not yet configured a transport domain.)
Make sure that the automatically created user TMSADM is authorized (SM59 -> ABAP Connection ->
TMSADM@E61.DOMAIN_E61 -> Details -> Utilities(M) -> Authorization Test). The initial screen of
transaction STMS should show that this SAP System is now functioning as the controller of the transport
domain as shown here:

Including SAP Systems in the Transport Domain


The sequence of including an SAP system in a transport domain looks as follows:
On the DEV system in Azure, go to the transport system (Client 000) and call transaction STMS. Choose
Other Configuration from the dialog box and continue with Include System in Domain. Specify the Domain
Controller as target host (Including SAP Systems in the Transport Domain). The system is now waiting to be
included in the transport domain.
For security reasons, you then have to go back to the domain controller to confirm your request. Choose System Overview and approve the waiting system. Then confirm the prompt and the configuration will be distributed.
This SAP system now contains the necessary information about all the other SAP systems in the transport
domain. At the same time, the address data of the new SAP system is sent to all the other SAP systems, and the
SAP system is entered in the transport profile of the transport control program. Check whether RFCs and access
to the transport directory of the domain work.
Continue with the configuration of your transport system as usual as described in the documentation Change
and Transport System.
How to:
Make sure your STMS on premises is configured correctly.
Make sure the hostname of the Transport Domain Controller can be resolved by your virtual machine on Azure and vice versa.
Call transaction STMS -> Other Configuration -> Include System in Domain.
Confirm the connection in the on premises TMS system.
Configure transport routes, groups and layers as usual.
In site-to-site connected cross-premises scenarios, the latency between on-premises and Azure still can be
substantial. If we follow the sequence of transporting objects through development and test systems to
production or think about applying transports or support packages to the different systems, you realize that,
dependent on the location of the central transport directory, some of the systems will encounter high latency
reading or writing data in the central transport directory. The situation is similar to SAP landscape
configurations where the different systems are spread through different data centers with substantial distance
between the data centers.
In order to work around such latency and have the systems work fast in reading or writing to or from the transport directory, you can set up two STMS transport domains (one for on-premises and one for the systems in Azure) and link the transport domains. Please check this documentation, which explains the principles behind this concept in the SAP TMS:
http://help.sap.com/saphelp_me60/helpdata/en/c4/6045377b52253de10000009b38f889/content.htm?
frameset=/en/57/38dd924eb711d182bf0000e829fbfe/frameset.htm.
How to:
Set up a transport domain on each location (on-premises and Azure) using transaction STMS
http://help.sap.com/saphelp_nw70ehp3/helpdata/en/44/b4a0b47acc11d1899e0000e829fbbd/content.htm
Link the domains with a domain link and confirm the link between the two domains.
http://help.sap.com/saphelp_nw73ehp1/helpdata/en/a3/139838280c4f18e10000009b38f8cf/content.htm
Distribute the configuration to the linked system.
RFC traffic between SAP instances located in Azure and on-premises (Cross-Premises)
RFC traffic between systems which are on-premises and in Azure needs to work. To set up a connection call
transaction SM59 in a source system where you need to define an RFC connection towards the target system.
The configuration is similar to the standard setup of an RFC Connection.
We assume that in the cross-premises scenario, the VMs which run SAP systems that need to communicate
with each other are in the same domain. Therefore the setup of an RFC connection between SAP systems does
not differ from the setup steps and inputs in on-premises scenarios.
Accessing local fileshares from SAP instances located in Azure or vice versa
SAP instances located in Azure need to access file shares which are within the corporate premises. In addition,
on-premises SAP instances need to access file shares which are located in Azure. To enable the file shares you
must configure the permissions and sharing options on the local system. Make sure to open the ports on the
VPN or ExpressRoute connection between Azure and your datacenter.

Supportability
Azure Monitoring Solution for SAP
In order to enable the monitoring of mission critical SAP systems on Azure the SAP monitoring tools
SAPOSCOL or SAP Host Agent get data off the Azure Virtual Machine Service host via an Azure Monitoring
Extension for SAP. Since the demands by SAP were very specific to SAP applications, Microsoft decided not to
generically implement the required functionality into Azure, but leave it for customers to deploy the necessary
monitoring components and configurations to their Virtual Machines running in Azure. However, deployment
and lifecycle management of the monitoring components will be mostly automated by Azure.
Solution design
The solution developed to enable SAP Monitoring is based on the architecture of Azure VM Agent and Extension
framework. The idea of the Azure VM Agent and Extension framework is to allow installation of software
application(s) available in the Azure VM Extension gallery within a VM. The principal idea behind this concept is
to allow (in cases like the Azure Monitoring Extension for SAP), the deployment of special functionality into a
VM and the configuration of such software at deployment time.
The 'Azure VM Agent' that enables handling of specific Azure VM Extensions within the VM is injected into
Windows VMs by default on VM creation in the Azure portal. In case of SUSE, Red Hat or Oracle Linux, the VM
agent is already part of the Azure Marketplace image. If you upload a Linux VM from on-premises to Azure, the VM agent has to be installed manually.
The basic building blocks of the Monitoring solution in Azure for SAP looks like this:

As shown in the block diagram above, one part of the monitoring solution for SAP is hosted in the Azure VM
Image and Azure Extension Gallery which is a globally replicated repository that is managed by Azure
Operations. It is the responsibility of the joint SAP/MS team working on the Azure implementation of SAP to
work with Azure Operations to publish new versions of the Azure Monitoring Extension for SAP.
When you deploy a new Windows VM, the Azure VM Agent is automatically added into the VM. The function
of this agent is to coordinate the loading and configuration of the Azure Extensions for monitoring of SAP
NetWeaver Systems. For Linux VMs the Azure VM Agent is already part of the Azure Marketplace OS image.
However, there is a step that still needs to be executed by the customer. This is the enablement and
configuration of the performance collection. The process related to the configuration is automated by a
PowerShell script or CLI command. The PowerShell script can be downloaded in the Microsoft Azure Script
Center as described in the Deployment Guide.
The overall Architecture of the Azure monitoring solution for SAP looks like:
For the exact how-to and for detailed steps of using these PowerShell cmdlets or CLI command
during deployments, follow the instructions given in the Deployment Guide.
Integration of Azure located SAP instance into SAProuter
SAP instances running in Azure need to be accessible from SAProuter as well.

A SAProuter enables TCP/IP communication between participating systems if there is no direct IP connection. This provides the advantage that no end-to-end connection between the communication partners is necessary at the network level. The SAProuter listens on port 3299 by default. To connect SAP instances through a SAProuter, you need to provide the SAProuter string and host name with any attempt to connect.
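A connection attempt through a SAProuter can be sketched with the standard route-string syntax, where /H/ denotes a host hop and /S/ the service port. The host names below are illustrative assumptions:

```shell
# Sketch: compose an SAProuter route string for an SAP GUI connection.
# /H/ introduces a host hop, /S/ the service port; 3299 is the default
# SAProuter port. Host names are assumptions for illustration.
saprouter_host="saprouter.contoso.com"   # assumed on-premises SAProuter
target_host="sapazure1"                  # assumed SAP instance host in Azure
route="/H/$saprouter_host/S/3299/H/$target_host"
echo "$route"
```

The resulting string is what you would enter as the SAProuter string in the SAP GUI connection properties.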

SAP NetWeaver AS Java


So far the focus of the document has been SAP NetWeaver in general or the SAP NetWeaver ABAP stack. In this
small section, specific considerations for the SAP Java stack are listed. One of the most important SAP
NetWeaver Java exclusively based applications is the SAP Enterprise Portal. Other SAP NetWeaver based
applications like SAP PI and SAP Solution Manager use both the SAP NetWeaver ABAP and Java stacks.
Therefore, there certainly is a need to consider specific aspects related to the SAP NetWeaver Java stack as well.
SAP Enterprise Portal
The setup of an SAP Portal in an Azure Virtual Machine does not differ from an on-premises installation if you are deploying in cross-premises scenarios. Since DNS resolution is done on-premises, the port settings of the individual instances can be configured the same way as on-premises. The recommendations and restrictions described
in this document so far apply for an application like SAP Enterprise Portal or the SAP NetWeaver Java stack in
general.

A special deployment scenario by some customers is the direct exposure of the SAP Enterprise Portal to the
Internet while the virtual machine host is connected to the company network via site-to-site VPN tunnel or
ExpressRoute. For such a scenario, you have to make sure that specific ports are open and not blocked by
firewall or network security group. The same mechanics would need to be applied when you want to connect to
an SAP Java instance from on-premises in a Cloud-Only scenario.
The initial portal URI is http(s)://<Portalserver>:5XX00/irj, where the port is formed by 50000 plus (system number * 100). The default portal URI of SAP system 00 is <dns name>.<azure region>.cloudapp.azure.com:PublicPort/irj. For more details, have a look at
http://help.sap.com/saphelp_nw70ehp1/helpdata/de/a2/f9d7fed2adc340ab462ae159d19509/frameset.htm.
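The port rule above can be checked with a small calculation; portal_port is a hypothetical helper name used only for illustration:

```shell
# Sketch: the portal HTTP port is 50000 + (system number * 100),
# i.e. the 5<NN>00 pattern described above.
portal_port() {
  echo $((50000 + $1 * 100))
}
port_sys00=$(portal_port 0)   # SAP system number 00 -> 50000
port_sys01=$(portal_port 1)   # SAP system number 01 -> 50100
echo "http://<Portalserver>:$port_sys00/irj"
```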

If you want to customize the URL and/or ports of your SAP Enterprise Portal, please check this documentation:
Change Portal URL
Change Default port numbers, Portal port numbers

High Availability (HA) and Disaster Recovery (DR) for SAP NetWeaver
running on Azure Virtual Machines
Definition of terminologies
The term high availability (HA) is generally related to a set of technologies that minimizes IT disruptions by
providing business continuity of IT services through redundant, fault-tolerant or failover protected components
inside the same data center. In our case, within one Azure Region.
Disaster recovery (DR) is also targeting minimizing IT services disruption, and their recovery but across
different data centers, that are usually located hundreds of kilometers away. In our case usually between
different Azure Regions within the same geopolitical region or as established by you as a customer.
Overview of High Availability
We can separate the discussion about SAP high availability in Azure into two parts:
Azure infrastructure high availability, for example HA of compute (VMs), network, storage etc. and its
benefits for increasing SAP application availability.
SAP application high availability, for example HA of SAP software components:
SAP application servers
SAP ASCS/SCS instance
DB server
and how it can be combined with Azure infrastructure HA.
SAP High Availability in Azure has some differences compared to SAP High Availability in an on-premises
physical or virtual environment. The following paper from SAP describes standard SAP High Availability
configurations in virtualized environments on Windows: http://scn.sap.com/docs/DOC-44415. There is no
sapinst-integrated SAP-HA configuration for Linux like it exists for Windows. Regarding SAP HA on-premises
for Linux find more information here: http://scn.sap.com/docs/DOC-8541.
Azure Infrastructure High Availability
There is currently a single-VM SLA of 99.9%. To get an idea of how the availability of a single VM might look, you can simply build the product of the different available Azure SLAs:
https://azure.microsoft.com/support/legal/sla/.
The basis for the calculation is 30 days per month, or 43200 minutes. Therefore, 0.05% downtime corresponds
to 21.6 minutes. As usual, the availabilities of the different services multiply in the following way:
(Availability Service #1/100) * (Availability Service #2/100) * (Availability Service #3/100) * ...
For example:
(99.95/100) * (99.9/100) * (99.9/100) = 0.9975, or an overall availability of 99.75%.
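The compound-availability calculation above can be reproduced with a one-liner; the three SLA values are taken directly from the example:

```shell
# Sketch: multiply the example SLAs (99.95%, 99.9%, 99.9%) to get the
# overall availability of the chained services, as in the formula above.
overall=$(awk 'BEGIN { printf "%.2f", (99.95/100) * (99.9/100) * (99.9/100) * 100 }')
echo "Overall availability: $overall%"
```

Adding further dependent services multiplies in additional factors and lowers the overall number accordingly.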
Virtual Machine (VM) High Availability
There are two types of Azure platform events that can affect the availability of your virtual machines: planned
maintenance and unplanned maintenance.
Planned maintenance events are periodic updates made by Microsoft to the underlying Azure platform to
improve overall reliability, performance, and security of the platform infrastructure that your virtual
machines run on.
Unplanned maintenance events occur when the hardware or physical infrastructure underlying your virtual
machine has faulted in some way. This may include local network failures, local disk failures, or other rack
level failures. When such a failure is detected, the Azure platform will automatically migrate your virtual
machine from the unhealthy physical server hosting your virtual machine to a healthy physical server. Such
events are rare, but may also cause your virtual machine to reboot.
More details can be found in this documentation: http://azure.microsoft.com/documentation/articles/virtual-
machines-manage-availability
Azure Storage Redundancy
The data in your Microsoft Azure Storage Account is always replicated to ensure durability and high availability,
meeting the Azure Storage SLA even in the face of transient hardware failures.
Since Azure Storage is keeping three images of the data by default, RAID5 or RAID1 across multiple Azure disks
are not necessary.
More details can be found in this article: http://azure.microsoft.com/documentation/articles/storage-
redundancy/
Utilizing Azure Infrastructure VM Restart to Achieve Higher Availability of SAP Applications
If you decide not to use functionalities like Windows Server Failover Clustering (WSFC) or Pacemaker on Linux
(currently only supported for SLES 12 and higher), Azure VM Restart is utilized to protect an SAP System
against planned and unplanned downtime of the Azure physical server infrastructure and overall underlying
Azure platform.

NOTE
It is important to mention that Azure VM Restart primarily protects VMs and NOT applications. VM Restart does not
offer high availability for SAP applications, but it does offer a certain level of infrastructure availability and therefore
indirectly higher availability of SAP systems. There is also no SLA for the time it will take to restart a VM after a planned
or unplanned host outage. Therefore, this method of high availability is not suitable for critical components of an SAP
system like (A)SCS or DBMS.

Another important infrastructure element for high availability is storage. For example, the Azure Storage SLA is 99.9% availability. If you deploy all VMs and their disks into a single Azure Storage Account, potential Azure Storage unavailability will cause unavailability of all VMs placed in that Azure Storage Account, and also of all SAP components running inside those VMs.
Instead of putting all VMs into one single Azure Storage Account, you can also use dedicated storage accounts
for each VM, and in this way increase overall VM and SAP application availability by using multiple independent
Azure Storage Accounts.
Azure Managed Disks are automatically placed in the Fault Domain of the virtual machine they are attached to.
If you place two virtual machines in an availability set and use Managed Disks, the platform will take care of
distributing the Managed Disks into different Fault Domains as well. If you plan to use Premium Storage, we
highly recommend using Managed Disks as well.
A sample architecture of an SAP NetWeaver system that uses Azure infrastructure HA and storage accounts
could look like this:

A sample architecture of an SAP NetWeaver system that uses Azure infrastructure HA and Managed Disks could
look like this:
For critical SAP components we achieved the following so far:
High Availability of SAP Application Servers (AS)
SAP application server instances are redundant components. Each SAP AS instance is deployed on its own VM, which runs in a different Azure Fault and Upgrade Domain (see chapters Fault Domains and Upgrade Domains). This is ensured by using Azure Availability Sets (see chapter Azure Availability Sets).
Potential planned or unplanned unavailability of an Azure Fault or Upgrade Domain will cause
unavailability of a restricted number of VMs with their SAP AS instances.
Each SAP AS instance is placed in its own Azure Storage Account; potential unavailability of one Azure Storage Account will cause unavailability of only one VM with its SAP AS instance. However, be aware that there is a limit on the number of Azure Storage Accounts within one Azure subscription. To ensure automatic start of the SAP AS instance after a VM reboot, make sure to set the Autostart parameter in the instance start profile, as described in chapter Using Autostart for SAP instances. Please also read chapter High Availability for SAP Application Servers for more details.
Even if you use Managed Disks, those disks are also stored in an Azure Storage Account and can be unavailable in the event of a storage outage.
Higher Availability of SAP (A)SCS instance
Here we utilize Azure VM Restart to protect the VM with the installed SAP (A)SCS instance. In the case of planned or unplanned downtime of Azure servers, VMs are restarted on another available server. As mentioned earlier, Azure VM Restart primarily protects VMs and NOT applications, in this case the (A)SCS instance. Through the VM Restart, we indirectly reach higher availability of the SAP (A)SCS instance. To ensure automatic start of the (A)SCS instance after the VM reboot, make sure to set the Autostart parameter in the (A)SCS instance start profile, as described in chapter Using Autostart for SAP instances. This means the (A)SCS instance as a Single Point of Failure (SPOF) running in a single VM will be the determinative factor for the availability of the whole SAP landscape.
Higher Availability of DBMS Server
Here, similar to the SAP (A)SCS instance use case, we utilize Azure VM Restart to protect the VM with
installed DBMS software, and we achieve higher availability of DBMS software through VM Restart.
DBMS running in a single VM is also a SPOF, and it is the determinative factor for the availability of the
whole SAP landscape.
SAP Application High Availability on Azure IaaS
To achieve full SAP system high availability, we need to protect all critical SAP system components, for example
redundant SAP application servers, and unique components (for example Single Point of Failure) like SAP
(A)SCS instance and DBMS.
High Availability for SAP Application Servers
For the SAP application servers/dialog instances, it's not necessary to think about a specific high availability solution. High availability is simply achieved by redundancy, that is, by having enough of them in different virtual machines. They should all be placed in the same Azure Availability Set to prevent the VMs from being updated at the same time during planned maintenance downtime. The basic functionality, which builds on different Upgrade and Fault Domains within an Azure Scale Unit, was already introduced in chapter Upgrade Domains. Azure Availability Sets were presented in chapter Azure Availability Sets of this document.
The number of Fault and Upgrade Domains that can be used by an Azure Availability Set within an Azure Scale Unit is finite. This means that if you put enough VMs into one Availability Set, sooner or later more than one VM ends up in the same Fault or Upgrade Domain.
If you deploy a few SAP application server instances in their dedicated VMs, and assuming that we have five Upgrade Domains, the following picture emerges at the end. The actual maximum number of Fault and Upgrade Domains within an Availability Set might change in the future:

More details can be found in this documentation: http://azure.microsoft.com/documentation/articles/virtual-machines-manage-availability
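The effect described above, VMs wrapping around a finite set of Upgrade Domains, can be sketched with a simple round-robin model. This is an illustration only: the actual placement is done by the Azure platform, and the five-domain count is just the example value used above.

```python
# Sketch: an availability set spreads VMs over a finite number of upgrade
# domains in a round-robin fashion, so with more VMs than domains, several
# VMs inevitably share a domain (and may be rebooted together).

UPGRADE_DOMAINS = 5  # example value from the text; the real maximum may change

def assign_upgrade_domains(vm_names):
    return {vm: i % UPGRADE_DOMAINS for i, vm in enumerate(vm_names)}

vms = [f"sap-as-{i}" for i in range(7)]
print(assign_upgrade_domains(vms))
# With 7 VMs and 5 domains, sap-as-0/sap-as-5 share domain 0,
# and sap-as-1/sap-as-6 share domain 1.
```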
High Availability for the SAP (A)SCS instance on Windows
Windows Server Failover Cluster (WSFC) is a frequently used solution to protect the SAP (A)SCS instance. It is also integrated into sapinst in the form of an "HA installation". At this point in time, the Azure infrastructure is not able to provide the functionality to set up the required Windows Server Failover Cluster the same way as it's done on-premises.
As of January 2016 the Azure cloud platform running the Windows operating system does not provide the
possibility of using a cluster shared volume on a disk shared between two Azure VMs.
A valid solution though is the usage of 3rd-party software which provides a shared volume by synchronous and
transparent disk replication which can be integrated into WSFC. This approach implies that only the active
cluster node is able to access one of the disk copies at a point in time. As of January 2016 this HA configuration
is supported to protect the SAP (A)SCS instance on Windows guest OS on Azure VMs in combination with 3rd-
party software SIOS DataKeeper.
The SIOS DataKeeper solution provides a shared disk cluster resource to Windows Failover Clusters by having:
An additional Azure VHD attached to each of the virtual machines (VMs) that are in a Windows Cluster
configuration
SIOS DataKeeper Cluster Edition running on both VM nodes
SIOS DataKeeper Cluster Edition configured so that it synchronously mirrors the content of the additional VHD-attached volume of the source VM to the additional VHD-attached volume of the target VM.
SIOS DataKeeper abstracts the source and target local volumes and presents them to Windows Failover Cluster as a single shared disk.
You can find all details on how to install a Windows Failover Cluster with SIOS DataKeeper and SAP in the
Clustering SAP ASCS Instance using Windows Server Failover Cluster on Azure with SIOS DataKeeper white
paper.
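The principle behind this setup, synchronous mirroring so that the passive node always holds an up-to-date copy, can be sketched abstractly. This is purely conceptual: SIOS DataKeeper replicates at the volume/block level, not through an API like the one below.

```python
# Conceptual sketch of synchronous replication: a write is acknowledged only
# after both the source and the target copy have it, which is what lets the
# cluster treat the mirrored pair as one "shared" disk.

class MirroredVolume:
    def __init__(self):
        self.source = {}  # volume on the active node
        self.target = {}  # volume on the passive node

    def write(self, block, data):
        self.source[block] = data
        self.target[block] = data  # replicated before the write is acknowledged
        return "ack"

vol = MirroredVolume()
vol.write(0, b"sapmnt global files")
assert vol.source == vol.target  # both copies are always in sync
```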
High Availability for the SAP (A)SCS instance on Linux
As of Dec 2015 there is also no equivalent to shared disk WSFC for Linux VMs on Azure. Alternative solutions
using 3rd-party software like SIOS for Windows are not validated yet for running SAP on Linux on Azure.
High Availability for the SAP database instance
The typical SAP DBMS HA setup is based on two DBMS VMs where DBMS high-availability functionality is used
to replicate data from the active DBMS instance to the second VM into a passive DBMS instance.
High Availability and Disaster recovery functionality for DBMS in general as well as specific DBMS are described
in the DBMS Deployment Guide.
End-to-End High Availability for the Complete SAP System
Here are two examples of a complete SAP NetWeaver HA architecture in Azure - one for Windows and one for
Linux.
Unmanaged disks only: The concepts explained below may need to be compromised a bit when you deploy many SAP systems and the number of VMs deployed exceeds the maximum limit of Storage Accounts per subscription. In such cases, VHDs of VMs need to be combined within one Storage Account. Usually you would do so by combining VHDs of SAP application layer VMs of different SAP systems. You can also combine different VHDs of different DBMS VMs of different SAP systems in one Azure Storage Account, keeping the IOPS limits of Azure Storage Accounts in mind (https://azure.microsoft.com/documentation/articles/storage-scalability-targets).
HA on Windows
The following Azure constructs are used for the SAP NetWeaver system, to minimize impact by infrastructure
issues and host patching:
The complete system is deployed on Azure (required - DBMS layer, (A)SCS instance and complete
application layer need to run in the same location).
The complete system runs within one Azure subscription (required).
The complete system runs within one Azure Virtual Network (required).
The separation of the VMs of one SAP system into three Availability Sets is possible even with all the VMs
belonging to the same Virtual Network.
All VMs running DBMS instances of one SAP system are in one Availability Set. We assume that there is
more than one VM running DBMS instances per system since native DBMS high availability features are
used, like SQL Server AlwaysOn or Oracle Data Guard.
All VMs running DBMS instances use their own storage account. DBMS data and log files are replicated from
one storage account to another storage account using DBMS high availability functions that synchronize the
data. Unavailability of one storage account will cause unavailability of one SQL Windows cluster node, but
not the whole SQL Server service.
All VMs running (A)SCS instance of one SAP system are in one Availability Set. A Windows Server Failover
Cluster (WSFC) is configured inside of those VMs to protect the (A)SCS instance.
All VMs running (A)SCS instances use their own storage account. (A)SCS instance files and SAP global folder
are replicated from one storage account to another storage account using SIOS DataKeeper replication.
Unavailability of one storage account will cause unavailability of one (A)SCS Windows cluster node, but not
the whole (A)SCS service.
ALL the VMs representing the SAP application server layer are in a third Availability Set.
ALL the VMs running SAP application servers use their own storage account. Unavailability of one storage
account will cause unavailability of one SAP application server, where other SAP AS continue to run.
The following figure illustrates the same landscape using Managed Disks.
HA on Linux

The architecture for SAP HA on Linux on Azure is basically the same as for Windows, as described above. As of January 2016, there is no SAP (A)SCS HA solution supported yet on Linux on Azure.
As a consequence, as of January 2016, an SAP-Linux-Azure system cannot achieve the same availability as an SAP-Windows-Azure system, because of the missing HA for the (A)SCS instance and the single-instance SAP ASE database.
Using Autostart for SAP instances
SAP offers functionality to start SAP instances immediately after the start of the OS within the VM. The exact steps are documented in SAP Knowledge Base Article 1909114. However, SAP no longer recommends using the setting, because there is no control over the order of instance restarts if more than one VM is affected or multiple instances run per VM. Assuming a typical Azure scenario of one SAP application server instance in a VM, and the case of a single VM eventually getting restarted, Autostart is not really critical and can be enabled by adding this parameter:

Autostart = 1

into the start profile of the SAP ABAP and/or Java instance.
NOTE
The Autostart parameter can have some downsides as well. In more detail, the parameter triggers the start of an SAP ABAP or Java instance when the related Windows/Linux service of the instance is started. That certainly is the case when the operating system boots up. However, restarts of SAP services are also common for SAP Software Lifecycle Management functionality like SUM or other updates or upgrades. These functionalities do not expect an instance to be restarted automatically at all. Therefore, the Autostart parameter should be disabled before running such tasks. The Autostart parameter also should not be used for SAP instances that are clustered, like ASCS/SCS/CI.

See additional information regarding autostart for SAP instances here:


Start/Stop SAP along with your Unix Server Start/Stop
Starting and Stopping SAP NetWeaver Management Agents
How to enable auto Start of HANA Database
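Because Autostart should be disabled before SUM runs or upgrades and re-enabled afterwards, it can be handy to script the toggle. The sketch below is a hypothetical helper: the profile content is an invented example, and it relies only on the fact that SAP start profiles are plain `Name = value` lines.

```python
# Hypothetical helper: add or remove "Autostart = 1" in the text of an SAP
# instance start profile. The profile content below is an invented example.

def set_autostart(profile_text, enabled):
    # Drop any existing Autostart line, then re-add it if requested.
    lines = [line for line in profile_text.splitlines()
             if not line.strip().startswith("Autostart")]
    if enabled:
        lines.append("Autostart = 1")
    return "\n".join(lines) + "\n"

profile = "SAPSYSTEMNAME = ABC\nINSTANCE_NAME = D00\nAutostart = 1\n"
print(set_autostart(profile, enabled=False))  # Autostart line removed
```

Re-enabling after maintenance is then just `set_autostart(profile, enabled=True)`.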
Larger 3-Tier SAP systems
High-availability aspects of 3-tier SAP configurations were already discussed in earlier sections. But what about systems where the DBMS server requirements are too large to locate in Azure, while the SAP application layer could be deployed into Azure?
Location of 3-Tier SAP configurations
It is not supported to split the application tier itself or the application and DBMS tier between on-premises and
Azure. An SAP system is either completely deployed on-premises OR in Azure. It is also not supported to have
some of the application servers run on-premises and some others in Azure. That is the starting point of the
discussion. We also are not supporting to have the DBMS components of an SAP system and the SAP
application server layer deployed in two different Azure Regions. For example DBMS in West US and SAP
application layer in Central US. Reason for not supporting such configurations is the latency sensitivity of the
SAP NetWeaver architecture.
However, over the course of the last year, data center partners have developed co-locations to Azure Regions. These co-locations often are in very close proximity to the physical Azure data centers within an Azure Region. The short distance and connection of assets in the co-location through ExpressRoute into Azure can result in a latency that is less than 2 ms. In such cases, it is possible to locate the DBMS layer (including storage SAN/NAS) in such a co-location and the SAP application layer in Azure. As of December 2015, we don't have any deployments like that. But different customers with non-SAP application deployments are using such approaches already.
Offline Backup of SAP systems
Depending on the SAP configuration chosen (2-tier or 3-tier), there could be a need to back up the content of the VM itself and to have a backup of the database. The DBMS-related backups are expected to be done with database methods. A detailed description for the different databases can be found in the DBMS Guide. On the other hand, the SAP data can be backed up in an offline manner (including the database content as well) as described in this section, or online as described in the next section.
The offline backup would basically require a shutdown of the VM through the Azure portal and a copy of the base VM disk plus all disks attached to the VM. This would preserve a point-in-time image of the VM and its associated disks. It is recommended to copy the backups into a different Azure Storage Account. Hence, the procedure described in chapter Copying disks between Azure Storage Accounts of this document would apply.
Besides the shutdown using the Azure portal, one can also do it via PowerShell or CLI as described here:
https://azure.microsoft.com/documentation/articles/virtual-machines-deploy-rmtemplates-powershell/
A restore of that state would consist of deleting the base VM as well as the original disks of the base VM and mounted disks, copying back the saved disks to the original Storage Account or resource group for managed disks, and then redeploying the system. This article shows an example of how to script this process in PowerShell:
http://www.westerndevs.com/azure-snapshots/
Please make sure to install a new SAP license since restoring a VM backup as described above creates a new
hardware key.
Online backup of an SAP system
Backup of the DBMS is performed with DBMS-specific methods as described in the DBMS Guide.
Other VMs within the SAP system can be backed up using Azure Virtual Machine Backup functionality. Azure Virtual Machine Backup was introduced early in 2015 and is meanwhile a standard method to back up a complete VM in Azure. Azure Backup stores the backups in Azure and allows a restore of a VM again.

NOTE
As of Dec 2015 using VM Backup does NOT keep the unique VM ID which is used for SAP licensing. This means that a
restore from a VM backup requires installation of a new SAP license key as the restored VM is considered to be a new
VM and not a replacement of the former one which was saved.

Windows
Theoretically, VMs that run databases can be backed up in a consistent manner as well if the DBMS system supports the
Windows VSS (Volume Shadow Copy Service
https://msdn.microsoft.com/library/windows/desktop/bb968832(v=vs.85).aspx) as, for example, SQL Server does.
However, be aware that based on Azure VM backups point-in-time restores of databases are not possible. Therefore, the
recommendation is to perform backups of databases with DBMS functionality instead of relying on Azure VM Backup.
To get familiar with Azure Virtual Machine Backup please start here: https://docs.microsoft.com/azure/backup/backup-
azure-vms.
Other possibilities are to use a combination of Microsoft Data Protection Manager installed in an Azure VM and Azure
Backup to backup/restore databases. More information can be found here:
https://docs.microsoft.com/azure/backup/backup-azure-dpm-introduction.

Linux
There is no equivalent to Windows VSS in Linux. Therefore, only file-consistent backups are possible, but not application-consistent backups. The SAP DBMS backup should be done using DBMS functionality. The file system which includes the
SAP-related data can be saved, for example, using tar as described here:
http://help.sap.com/saphelp_nw70ehp2/helpdata/en/d3/c0da3ccbb04d35b186041ba6ac301f/content.htm
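The tar-based file backup can also be scripted. The sketch below uses Python's tarfile module instead of the tar command line; the directory names are placeholders and must be adjusted to the actual SAP file systems on a given host.

```python
# Sketch: file-consistent (NOT application-consistent) backup of SAP-related
# directories on Linux, using Python's tarfile module. Paths are placeholders.
import tarfile
import time

def backup_dirs(dirs, dest_prefix="/backup/sap-files"):
    archive = f"{dest_prefix}-{time.strftime('%Y%m%d-%H%M%S')}.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        for d in dirs:
            tar.add(d)  # recursive by default
    return archive

# Example with placeholder paths (run as a user that can read them):
# backup_dirs(["/sapmnt", "/usr/sap"])
```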

Azure as DR site for production SAP landscapes


Since mid-2014, extensions to various components around Hyper-V, System Center, and Azure enable the usage of Azure as a DR site for VMs running on-premises based on Hyper-V.
A blog detailing how to deploy this solution is documented here:
http://blogs.msdn.com/b/saponsqlserver/archive/2014/11/19/protecting-sap-solutions-with-azure-site-
recovery.aspx.

Summary
The key points of High Availability for SAP systems in Azure are:
At this point in time, the SAP single point of failure cannot be secured exactly the same way as it can be done in on-premises deployments. The reason is that shared disk clusters can't yet be built in Azure without the use of 3rd-party software.
For the DBMS layer you need to use DBMS functionality that does not rely on shared disk cluster technology.
Details are documented in the DBMS Guide.
To minimize the impact of problems within Fault Domains in the Azure infrastructure or host maintenance,
you should use Azure Availability Sets:
It is recommended to have one Availability Set for the SAP application layer.
It is recommended to have a separate Availability Set for the SAP DBMS layer.
It is NOT recommended to apply the same Availability set for VMs of different SAP systems.
It is recommended to use Premium Managed Disks.
For Backup purposes of the SAP DBMS layer, please check the DBMS Guide.
Backing up SAP Dialog instances makes little sense since it is usually faster to redeploy simple dialog
instances.
Backing up the VM which contains the global directory of the SAP system, and with it all the profiles of the different instances, does make sense and should be performed with Windows Backup or, for example, tar on Linux. Since there are differences between Windows Server 2008 (R2) and Windows Server 2012 (R2) which make it easier to back up using the more recent Windows Server releases, we recommend running Windows Server 2012 (R2) as the Windows guest operating system.
High availability for SAP NetWeaver on Azure VMs
8/21/2017 49 min to read

Azure Virtual Machines is the solution for organizations that need compute, storage, and network resources, in
minimal time, and without lengthy procurement cycles. You can use Azure Virtual Machines to deploy classic
applications like SAP NetWeaver-based ABAP, Java, and an ABAP+Java stack. Extend reliability and availability
without additional on-premises resources. Azure Virtual Machines supports cross-premises connectivity, so you
can integrate Azure Virtual Machines into your organization's on-premises domains, private clouds, and SAP
system landscape.
In this article, we cover the steps that you can take to deploy high-availability SAP systems in Azure by using the
Azure Resource Manager deployment model. We walk you through these major tasks:
Find the right SAP Notes and installation guides, listed in the Resources section. This article complements SAP
installation documentation and SAP Notes, which are the primary resources that can help you install and deploy
SAP software on specific platforms.
Learn the differences between the Azure Resource Manager deployment model and the Azure classic
deployment model.
Learn about Windows Server Failover Clustering quorum modes, so you can select the model that is right for
your Azure deployment.
Learn about Windows Server Failover Clustering shared storage in Azure services.
Learn how to help protect single-point-of-failure components like Advanced Business Application
Programming (ABAP) SAP Central Services (ASCS)/SAP Central Services (SCS) and database management
systems (DBMS), and redundant components like SAP Application Server, in Azure.
Follow a step-by-step example of an installation and configuration of a high-availability SAP system in a
Windows Server Failover Clustering cluster in Azure by using Azure Resource Manager.
Learn about additional steps required to use Windows Server Failover Clustering in Azure that are not needed in an on-premises deployment.
To simplify deployment and configuration, in this article, we use the SAP three-tier high-availability Resource
Manager templates. The templates automate deployment of the entire infrastructure that you need for a high-
availability SAP system. The infrastructure also supports SAP Application Performance Standard (SAPS) sizing of
your SAP system.

Prerequisites
Before you start, make sure that you meet the prerequisites that are described in the following sections. Also, be
sure to check all resources listed in the Resources section.
In this article, we use Azure Resource Manager templates for three-tier SAP NetWeaver. For a helpful overview of
templates, see SAP Azure Resource Manager templates.

Resources
These articles cover SAP deployments in Azure:
Azure Virtual Machines planning and implementation for SAP NetWeaver
Azure Virtual Machines deployment for SAP NetWeaver
Azure Virtual Machines DBMS deployment for SAP NetWeaver
Azure Virtual Machines high availability for SAP NetWeaver (this guide)
NOTE
Whenever possible, we give you a link to the referring SAP installation guide (see the SAP installation guides). For
prerequisites and information about the installation process, it's a good idea to read the SAP NetWeaver installation guides
carefully. This article covers only specific tasks for SAP NetWeaver-based systems that you can use with Azure Virtual
Machines.

These SAP Notes are related to the topic of SAP in Azure:

NOTE NUMBER TITLE

1928533 SAP Applications on Azure: Supported Products and Sizing

2015553 SAP on Microsoft Azure: Support Prerequisites

1999351 Enhanced Azure Monitoring for SAP

2178632 Key Monitoring Metrics for SAP on Microsoft Azure

1409604 Virtualization on Windows: Enhanced Monitoring

2243692 Use of Azure Premium SSD Storage for SAP DBMS Instance

Learn more about the limitations of Azure subscriptions, including general default limitations and maximum
limitations.

High-availability SAP with Azure Resource Manager vs. the Azure


classic deployment model
The Azure Resource Manager and Azure classic deployment models are different in the following areas:
Resource groups
Azure internal load balancer dependency on the Azure resource group
Support for SAP multi-SID scenarios
Resource groups
In Azure Resource Manager, you can use resource groups to manage all the application resources in your Azure subscription. In an integrated approach, all resources in a resource group have the same life cycle. For example, all resources are created at the same time, and they are deleted at the same time. Learn more about resource groups.
Azure internal load balancer dependency on the Azure resource group
In the Azure classic deployment model, there is a dependency between the Azure internal load balancer (Azure
Load Balancer service) and the cloud service group. Every internal load balancer needs one cloud service group.
In Azure Resource Manager, you don't need an Azure resource group to use Azure Load Balancer. The environment
is simpler and more flexible.
Support for SAP multi-SID scenarios
In Azure Resource Manager, you can install multiple SAP system identifier (SID) ASCS/SCS instances in one cluster.
Multi-SID instances are possible because of support for multiple IP addresses for each Azure internal load balancer.
To use the Azure classic deployment model, follow the procedures described in SAP NetWeaver in Azure:
Clustering SAP ASCS/SCS instances by using Windows Server Failover Clustering in Azure with SIOS DataKeeper.
IMPORTANT
We strongly recommend that you use the Azure Resource Manager deployment model for your SAP installations. It offers
many benefits that are not available in the classic deployment model. Learn more about Azure deployment models.

Windows Server Failover Clustering


Windows Server Failover Clustering is the foundation of a high-availability SAP ASCS/SCS installation and DBMS
in Windows.
A failover cluster is a group of 1+n independent servers (nodes) that work together to increase the availability of
applications and services. If a node failure occurs, Windows Server Failover Clustering calculates the number of
failures that can occur while maintaining a healthy cluster to provide applications and services. You can choose
from different quorum modes to achieve failover clustering.
Quorum modes
You can choose from four quorum modes when you use Windows Server Failover Clustering:
Node Majority. Each node of the cluster can vote. The cluster functions only with a majority of votes, that is,
with more than half the votes. We recommend this option for clusters that have an uneven number of nodes.
For example, three nodes in a seven-node cluster can fail, and the cluster still achieves a majority and continues to run.
Node and Disk Majority. Each node and a designated disk (a disk witness) in the cluster storage can vote
when they are available and in communication. The cluster functions only with a majority of the votes, that is,
with more than half the votes. This mode makes sense in a cluster environment with an even number of nodes.
If half the nodes and the disk are online, the cluster remains in a healthy state.
Node and File Share Majority. Each node plus a designated file share (a file share witness) that the
administrator creates can vote, regardless of whether the nodes and file share are available and in
communication. The cluster functions only with a majority of the votes, that is, with more than half the votes.
This mode makes sense in a cluster environment with an even number of nodes. It's similar to the Node and
Disk Majority mode, but it uses a witness file share instead of a witness disk. This mode is easy to implement,
but if the file share itself is not highly available, it might become a single point of failure.
No Majority: Disk Only. The cluster has a quorum if one node is available and in communication with a
specific disk in the cluster storage. Only the nodes that also are in communication with that disk can join the
cluster. We recommend that you do not use this mode.
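All four modes come down to the same majority rule over a set of votes. The sketch below makes that arithmetic concrete; it is illustrative only, as the real quorum calculation is performed by the cluster service.

```python
# Sketch of the majority rule behind WSFC quorum: the cluster keeps running
# only while strictly more than half of the configured votes are present.

def has_quorum(votes_present, total_votes):
    return votes_present > total_votes / 2

# Node Majority, 7 nodes: up to 3 nodes may fail.
print(has_quorum(4, 7))  # True
print(has_quorum(3, 7))  # False

# Node and File Share Majority, 2 nodes + 1 witness = 3 votes:
# one surviving node plus the witness is still a majority.
print(has_quorum(2, 3))  # True

# An even split (e.g. 2 of 4 votes) is NOT a majority, which is why a
# witness is added to clusters with an even number of nodes.
print(has_quorum(2, 4))  # False
```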

Windows Server Failover Clustering on-premises


Figure 1 shows a cluster of two nodes. If the network connection between the nodes fails and both nodes stay up
and running, a quorum disk or file share determines which node will continue to provide the cluster's applications
and services. The node that has access to the quorum disk or file share is the node that ensures that services
continue.
Because this example uses a two-node cluster, we use the Node and File Share Majority quorum mode. The Node
and Disk Majority also is a valid option. In a production environment, we recommend that you use a quorum disk.
You can use network and storage system technology to make it highly available.
Figure 1: Example of a Windows Server Failover Clustering configuration for SAP ASCS/SCS in Azure
Shared storage
Figure 1 also shows a two-node shared storage cluster. In an on-premises shared storage cluster, all nodes in the
cluster detect shared storage. A locking mechanism protects the data from corruption. All nodes can detect if
another node fails. If one node fails, the remaining node takes ownership of the storage resources and ensures the
availability of services.

NOTE
You don't need shared disks for high availability with some DBMS applications, like with SQL Server. SQL Server Always On
replicates DBMS data and log files from the local disk of one cluster node to the local disk of another cluster node. In that
case, the Windows cluster configuration doesn't need a shared disk.

Networking and name resolution


Client computers reach the cluster over a virtual IP address and a virtual host name that the DNS server provides.
The on-premises nodes and the DNS server can handle multiple IP addresses.
In a typical setup, you use two or more network connections:
A dedicated connection to the storage
A cluster-internal network connection for the heartbeat
A public network that clients use to connect to the cluster

Windows Server Failover Clustering in Azure


Compared to bare metal or private cloud deployments, Azure Virtual Machines requires additional steps to
configure Windows Server Failover Clustering. When you build a shared cluster disk, you need to set several IP
addresses and virtual host names for the SAP ASCS/SCS instance.
In this article, we discuss key concepts and the additional steps required to build an SAP high-availability central
services cluster in Azure. We show you how to set up the third-party tool SIOS DataKeeper, and how to configure
the Azure internal load balancer. You can use these tools to create a Windows failover cluster with a file share
witness in Azure.

Figure 2: Windows Server Failover Clustering configuration in Azure without a shared disk
Shared disk in Azure with SIOS DataKeeper
You need cluster shared storage for a high-availability SAP ASCS/SCS instance. As of September 2016, Azure
doesn't offer shared storage that you can use to create a shared storage cluster. You can use third-party software
SIOS DataKeeper Cluster Edition to create a mirrored storage that simulates cluster shared storage. The SIOS
solution provides real-time synchronous data replication. This is how you can create a shared disk resource for a
cluster:
1. Attach an additional Azure virtual hard disk (VHD) to each of the virtual machines (VMs) in a Windows cluster
configuration.
2. Run SIOS DataKeeper Cluster Edition on both virtual machine nodes.
3. Configure SIOS DataKeeper Cluster Edition so that it mirrors the content of the additional VHD attached volume
from the source virtual machine to the additional VHD attached volume of the target virtual machine. SIOS
DataKeeper abstracts the source and target local volumes, and then presents them to Windows Server Failover
Clustering as one shared disk.
Get more information about SIOS DataKeeper.
Figure 3: Windows Server Failover Clustering configuration in Azure with SIOS DataKeeper

NOTE
You don't need shared disks for high availability with some DBMS products, like SQL Server. SQL Server Always On replicates
DBMS data and log files from the local disk of one cluster node to the local disk of another cluster node. In this case, the
Windows cluster configuration doesn't need a shared disk.

Name resolution in Azure


The Azure cloud platform doesn't offer the option to configure virtual IP addresses, such as floating IP addresses.
You need an alternative solution to set up a virtual IP address to reach the cluster resource in the cloud. Azure has
an internal load balancer in the Azure Load Balancer service. With the internal load balancer, clients reach the
cluster over the cluster virtual IP address. You need to deploy the internal load balancer in the resource group that
contains the cluster nodes. Then, configure all necessary port forwarding rules with the probe ports of the internal
load balancer. The clients can connect via the virtual host name. The DNS server resolves the cluster IP address,
and the internal load balancer handles port forwarding to the active node of the cluster.

SAP NetWeaver high availability in Azure Infrastructure-as-a-Service (IaaS)
To achieve SAP application high availability, such as for SAP software components, you need to protect the
following components:
SAP Application Server instance
SAP ASCS/SCS instance
DBMS server
For more information about protecting SAP components in high-availability scenarios, see Azure Virtual Machines
planning and implementation for SAP NetWeaver.
High-availability SAP Application Server
You usually don't need a specific high-availability solution for the SAP Application Server and dialog instances.
You achieve high availability through redundancy, by configuring multiple dialog instances in separate instances
of Azure Virtual Machines. You should have at least two SAP application instances installed in two instances of
Azure Virtual Machines.

Figure 4: High-availability SAP Application Server


You must place all virtual machines that host SAP Application Server instances in the same Azure availability set.
An Azure availability set ensures that:
The virtual machines are distributed across update domains. An update domain, for example, makes sure that
the virtual machines aren't all updated at the same time during planned maintenance downtime.
The virtual machines are distributed across fault domains. A fault domain, for example, makes sure that virtual
machines are deployed so that no single point of failure affects the availability of all virtual machines.
Learn more about how to manage the availability of virtual machines.
Because the Azure storage account is a potential single point of failure, it's important to have at least two Azure
storage accounts, with the virtual machines distributed across them. In an ideal setup, the disks of each virtual
machine that is running an SAP dialog instance would be deployed in a different storage account.
High-availability SAP ASCS/SCS instance
Figure 5 is an example of a high-availability SAP ASCS/SCS instance.
Figure 5: High-availability SAP ASCS/SCS instance
SAP ASCS/SCS instance high availability with Windows Server Failover Clustering in Azure
Compared to bare metal or private cloud deployments, Azure Virtual Machines requires additional steps to
configure Windows Server Failover Clustering. To build a Windows failover cluster, you need a shared cluster disk,
several IP addresses, several virtual host names, and an Azure internal load balancer for clustering an SAP
ASCS/SCS instance. We discuss this in more detail later in the article.

Figure 6: Windows Server Failover Clustering for an SAP ASCS/SCS configuration in Azure with SIOS DataKeeper
High-availability DBMS instance
The DBMS also is a single point of failure in an SAP system. You need to protect it by using a high-availability
solution. Figure 7 shows a SQL Server Always On high-availability solution in Azure, with Windows Server Failover
Clustering and the Azure internal load balancer. SQL Server Always On replicates DBMS data and log files by using
its own DBMS replication. In this case, you don't need cluster shared disks, which simplifies the entire setup.

Figure 7: Example of a high-availability SAP DBMS, with SQL Server Always On


For more information about clustering SQL Server in Azure by using the Azure Resource Manager deployment
model, see these articles:
Configure Always On availability group in Azure Virtual Machines manually by using Resource Manager
Configure an Azure internal load balancer for an Always On availability group in Azure

End-to-end high-availability deployment scenarios


Deployment scenario using Architectural Template 1
Figure 8 shows an example of an SAP NetWeaver high-availability architecture in Azure for one SAP system. This
scenario is set up as follows:
One dedicated cluster is used for the SAP ASCS/SCS instance.
One dedicated cluster is used for the DBMS instance.
SAP Application Server instances are deployed in their own dedicated VMs.
Figure 8: SAP high-availability Architectural Template 1, dedicated clusters for ASCS/SCS and DBMS
Deployment scenario using Architectural Template 2
Figure 9 shows an example of an SAP NetWeaver high-availability architecture in Azure for one SAP system. This
scenario is set up as follows:
One dedicated cluster is used for both the SAP ASCS/SCS instance and the DBMS.
SAP Application Server instances are deployed in their own dedicated VMs.
Figure 9: SAP high-availability Architectural Template 2, with one dedicated cluster for both ASCS/SCS and
DBMS
Deployment scenario using Architectural Template 3
Figure 10 shows an example of an SAP NetWeaver high-availability architecture in Azure for two SAP systems,
with <SID1> and <SID2>. This scenario is set up as follows:
One dedicated cluster is used for both the SAP ASCS/SCS SID1 instance and the SAP ASCS/SCS SID2 instance
(one cluster).
One dedicated cluster is used for DBMS SID1, and another dedicated cluster is used for DBMS SID2 (two
clusters).
SAP Application Server instances for the SAP system SID1 have their own dedicated VMs.
SAP Application Server instances for the SAP system SID2 have their own dedicated VMs.
Figure 10: SAP high-availability Architectural Template 3, with a dedicated cluster for different ASCS/SCS instances

Prepare the infrastructure


Prepare the infrastructure for Architectural Template 1
Azure Resource Manager templates for SAP help simplify deployment of required resources.
The three-tier templates in Azure Resource Manager also support high-availability scenarios, such as
Architectural Template 1, which has two clusters. Each cluster protects an SAP single point of failure: one for the
SAP ASCS/SCS instance and one for the DBMS.
Here's where you can get Azure Resource Manager templates for the example scenario we describe in this article:
Azure Marketplace image
Custom image
To prepare the infrastructure for Architectural Template 1:
In the Azure portal, on the Parameters blade, in the SYSTEMAVAILABILITY box, select HA.
Figure 11: Set SAP high-availability Azure Resource Manager parameters
The templates create:
Virtual machines:
SAP Application Server virtual machines: <SAPSystemSID>-di-<Number>
ASCS/SCS cluster virtual machines: <SAPSystemSID>-ascs-<Number>
DBMS cluster: <SAPSystemSID>-db-<Number>
Network cards for all virtual machines, with associated IP addresses:
<SAPSystemSID>-nic-di-<Number>
<SAPSystemSID>-nic-ascs-<Number>
<SAPSystemSID>-nic-db-<Number>
Azure storage accounts
Availability sets for:
SAP Application Server virtual machines: <SAPSystemSID>-avset-di
SAP ASCS/SCS cluster virtual machines: <SAPSystemSID>-avset-ascs
DBMS cluster virtual machines: <SAPSystemSID>-avset-db
Azure internal load balancer:
With all ports for the ASCS/SCS instance and IP address <SAPSystemSID>-lb-ascs
With all ports for the SQL Server DBMS and IP address <SAPSystemSID>-lb-db
Network security group: <SAPSystemSID>-nsg-ascs-0
With an open external Remote Desktop Protocol (RDP) port to the <SAPSystemSID>-ascs-0 virtual
machine

NOTE
All IP addresses of the network cards and Azure internal load balancers are dynamic by default. Change them to static IP
addresses. We describe how to do this later in the article.
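The naming convention the template uses for these resources can be sketched programmatically. The following Python snippet is illustrative only: the `sap_resource_names` helper and the counts are assumptions for the sketch, while the name patterns themselves are the ones listed above.

```python
# Sketch of the resource-naming convention used by the SAP three-tier templates.
# The helper name and the default counts are illustrative; the templates
# themselves generate these names, this only documents the pattern.

def sap_resource_names(sid, di_count=2, ascs_count=2, db_count=2):
    """Return the resource names the template creates for one SAP system ID."""
    names = {
        "vms": (
            [f"{sid}-di-{n}" for n in range(di_count)]        # SAP Application Servers
            + [f"{sid}-ascs-{n}" for n in range(ascs_count)]  # ASCS/SCS cluster nodes
            + [f"{sid}-db-{n}" for n in range(db_count)]      # DBMS cluster nodes
        ),
        "avsets": [f"{sid}-avset-di", f"{sid}-avset-ascs", f"{sid}-avset-db"],
        "load_balancers": [f"{sid}-lb-ascs", f"{sid}-lb-db"],
        "nsg": f"{sid}-nsg-ascs-0",
    }
    # Every VM gets a matching network card: <SAPSystemSID>-nic-<role>-<Number>
    names["nics"] = [vm.replace(f"{sid}-", f"{sid}-nic-", 1) for vm in names["vms"]]
    return names

names = sap_resource_names("pr1")
print(names["vms"][0])   # pr1-di-0
print(names["nics"][2])  # pr1-nic-ascs-0
```

Later sections of this article use exactly these names for the example system pr1.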

Deploy virtual machines with corporate network connectivity (cross-premises) to use in production
For production SAP systems, deploy Azure virtual machines with corporate network connectivity (cross-premises)
by using Azure Site-to-Site VPN or Azure ExpressRoute.

NOTE
You can use your own Azure Virtual Network instance if the virtual network and subnet have already been created and prepared.

1. In the Azure portal, on the Parameters blade, in the NEWOREXISTINGSUBNET box, select existing.
2. In the SUBNETID box, add the full string of your prepared Azure network SubnetID where you plan to deploy
your Azure virtual machines.
3. To get a list of all Azure network subnets, run this PowerShell command:

(Get-AzureRmVirtualNetwork -Name <azureVnetName> -ResourceGroupName <ResourceGroupOfVNET>).Subnets

The ID field shows the SUBNETID.


4. To get a list of all SUBNETID values, run this PowerShell command:

(Get-AzureRmVirtualNetwork -Name <azureVnetName> -ResourceGroupName <ResourceGroupOfVNET>).Subnets.Id

The SUBNETID looks like this:

/subscriptions/<SubscriptionId>/resourceGroups/<VPNName>/providers/Microsoft.Network/virtualNetworks/azureVnet/subnets/<SubnetName>
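Because the SUBNETID string follows a fixed resource-ID layout, its parts can be extracted programmatically. A minimal sketch in Python (the `parse_subnet_id` helper and the sample subscription, resource group, and subnet names are placeholders, not values from this deployment):

```python
# Parse an Azure subnet resource ID into its components.
# The ID alternates fixed keywords and values:
#   subscriptions/<id>/resourceGroups/<name>/providers/Microsoft.Network/
#   virtualNetworks/<vnet>/subnets/<subnet>

def parse_subnet_id(subnet_id):
    parts = subnet_id.strip("/").split("/")
    pairs = dict(zip(parts[0::2], parts[1::2]))  # keyword -> value
    return {
        "subscription": pairs["subscriptions"],
        "resource_group": pairs["resourceGroups"],
        "vnet": pairs["virtualNetworks"],
        "subnet": pairs["subnets"],
    }

subnet_id_example = ("/subscriptions/01234567-89ab-cdef-0123-456789abcdef"
                     "/resourceGroups/myVpnGroup/providers/Microsoft.Network"
                     "/virtualNetworks/azureVnet/subnets/sapSubnet")
print(parse_subnet_id(subnet_id_example)["subnet"])  # sapSubnet
```

This is only a convenience for checking that the value you paste into the SUBNETID box has the expected shape.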

Deploy cloud-only SAP instances for test and demo


You can deploy your high-availability SAP system in a cloud-only deployment model. This kind of deployment is
primarily useful for demo and test use cases; it's not suited for production use cases.
In the Azure portal, on the Parameters blade, in the NEWOREXISTINGSUBNET box, select new. Leave the
SUBNETID field empty.
The SAP Azure Resource Manager template automatically creates the Azure virtual network and subnet.

NOTE
You also need to deploy at least one dedicated virtual machine for Active Directory and DNS in the same Azure Virtual
Network instance. The template doesn't create these virtual machines.

Prepare the infrastructure for Architectural Template 2


You can use this Azure Resource Manager template for SAP to help simplify deployment of required infrastructure
resources for SAP Architectural Template 2.
Here's where you can get Azure Resource Manager templates for this deployment scenario:
Azure Marketplace image
Custom image
Prepare the infrastructure for Architectural Template 3
You can prepare the infrastructure and configure SAP for multi-SID. For example, you can add an additional SAP
ASCS/SCS instance into an existing cluster configuration. For more information, see Configure an additional SAP
ASCS/SCS instance into an existing cluster configuration to create an SAP multi-SID configuration in Azure
Resource Manager.
If you want to create a new multi-SID cluster, you can use the multi-SID quickstart templates on GitHub. To create a
new multi-SID cluster, you need to deploy the following three templates:
ASCS/SCS template
Database template
Application servers template
The following sections have more details about the templates and the parameters you need to provide in the
templates.
ASCS/SCS template
The ASCS/SCS template deploys two virtual machines that you can use to create a Windows Server failover cluster
that hosts multiple ASCS/SCS instances.
To set up the ASCS/SCS multi-SID template, in the ASCS/SCS multi-SID template, enter values for the following
parameters:
Resource Prefix. Set the resource prefix, which is used to prefix all resources that are created during the
deployment. Because the resources do not belong to only one SAP system, the prefix of the resource is not the
SID of one SAP system. The prefix must be between three and six characters.
Stack Type. Select the stack type of the SAP system. Depending on the stack type, Azure Load Balancer has one
(ABAP or Java only) or two (ABAP+Java) private IP addresses per SAP system.
OS Type. Select the operating system of the virtual machines.
SAP System Count. Select the number of SAP systems you want to install in this cluster.
System Availability. Select HA.
Admin Username and Admin Password. Create a new user that can be used to sign in to the machine.
New Or Existing Subnet. Set whether a new virtual network and subnet should be created, or an existing
subnet should be used. If you already have a virtual network that is connected to your on-premises network,
select existing.
Subnet Id. Set the ID of the subnet to which the virtual machines should be connected. Select the subnet of
your virtual private network (VPN) or ExpressRoute virtual network to connect the virtual machine to your
on-premises network. The ID usually looks like this:
/subscriptions/<subscription id>/resourceGroups/<resource group name>/providers/Microsoft.Network/virtualNetworks/<virtual network name>/subnets/<subnet name>
The template deploys one Azure Load Balancer instance, which supports multiple SAP systems.
The ASCS instances are configured for instance number 00, 10, 20...
The SCS instances are configured for instance number 01, 11, 21...
The ASCS Enqueue Replication Server (ERS) (Linux only) instances are configured for instance number 02, 12,
22...
The SCS ERS (Linux only) instances are configured for instance number 03, 13, 23...
The load balancer contains one VIP (two for Linux): one VIP for ASCS/SCS and, on Linux only, one VIP for ERS.
The following list contains all load balancing rules (where x is the number of the SAP system, for example, 1, 2, 3...):
Windows-specific ports for every SAP system: 445, 5985
ASCS ports (instance number x0): 32x0, 36x0, 39x0, 81x0, 5x013, 5x014, 5x016
SCS ports (instance number x1): 32x1, 33x1, 39x1, 81x1, 5x113, 5x114, 5x116
ASCS ERS ports on Linux (instance number x2): 33x2, 5x213, 5x214, 5x216
SCS ERS ports on Linux (instance number x3): 33x3, 5x313, 5x314, 5x316
The load balancer is configured to use the following probe ports (where x is the number of the SAP system, for
example, 1, 2, 3...):
ASCS/SCS internal load balancer probe port: 620x0
ERS internal load balancer probe port (Linux only): 621x2
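The port patterns above can be derived directly from the two-digit instance number. The following Python sketch shows the derivation for the ASCS rules; the helper name and the probe-port formula written as 62000 plus the instance number are my reading of the patterns above, not part of the template (the SCS and ERS ports follow the same substitution scheme):

```python
# Derive the load-balancing rule ports for a clustered ASCS instance from its
# two-digit instance number, following the port patterns listed above.

def ascs_ports(instance_number):
    nn = f"{instance_number:02d}"
    return {
        "enqueue": int(f"32{nn}"),
        "message_server": int(f"36{nn}"),
        "internal_message": int(f"39{nn}"),
        "message_server_http": int(f"81{nn}"),
        "sapstartsrv_http": int(f"5{nn}13"),
        "sapstartsrv_https": int(f"5{nn}14"),
        "enqueue_replication": int(f"5{nn}16"),
        "probe": 62000 + instance_number,  # "620x0" ASCS/SCS probe pattern
    }

# First SAP system in a multi-SID cluster: ASCS instance number 00
print(ascs_ports(0)["enqueue"])  # 3200
# Second SAP system: ASCS instance number 10
print(ascs_ports(10)["probe"])   # 62010
```

Running it for instance numbers 00, 10, 20 reproduces the per-system port sets listed above.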
Database template
The database template deploys one or two virtual machines that you can use to install the relational database
management system (RDBMS) for one SAP system. For example, if you deploy an ASCS/SCS template for five SAP
systems, you need to deploy this template five times.
To set up the database multi-SID template, in the database multi-SID template, enter values for the following
parameters:
Sap System Id. Enter the SAP system ID of the SAP system you want to install. The ID will be used as a prefix
for the resources that are deployed.
Os Type. Select the operating system of the virtual machines.
Dbtype. Select the type of the database you want to install on the cluster. Select SQL if you want to install
Microsoft SQL Server. Select HANA if you plan to install SAP HANA on the virtual machines. Make sure to
select the correct operating system type: select Windows for SQL, and select a Linux distribution for HANA. The
Azure Load Balancer that is connected to the virtual machines will be configured to support the selected
database type:
SQL. The load balancer will load-balance port 1433. Make sure to use this port for your SQL Server
Always On setup.
HANA. The load balancer will load-balance ports 35015 and 35017. Make sure to install SAP HANA with
instance number 50. The load balancer will use probe port 62550.
Sap System Size. Set the number of SAPS the new system will provide. If you are not sure how many SAPS the
system will require, ask your SAP Technology Partner or System Integrator.
System Availability. Select HA.
Admin Username and Admin Password. Create a new user that can be used to sign in to the machine.
Subnet Id. Enter the ID of the subnet that you used during the deployment of the ASCS/SCS template, or the ID
of the subnet that was created as part of the ASCS/SCS template deployment.
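The database-type port settings described for the Dbtype parameter can be summarized as follows. This is a sketch only: the dict and helper names are illustrative, and the HANA formula (3<nn>15 / 3<nn>17 with probe 625<nn>) is generalized from the instance-50 values stated above.

```python
# Ports the database template configures on its Azure internal load balancer,
# depending on the selected database type. Illustrative sketch of the values
# documented above.

DB_LB_PORTS = {
    # SQL Server Always On listens on the default SQL Server port.
    "SQL": {"ports": [1433]},
    # SAP HANA must be installed with instance number 50 so the load-balanced
    # ports resolve to 35015 / 35017, with probe port 62550.
    "HANA": {"ports": [35015, 35017], "probe": 62550},
}

def hana_ports(instance_number=50):
    """Derive the HANA load-balanced and probe ports from the instance number."""
    nn = f"{instance_number:02d}"
    return {"ports": [int(f"3{nn}15"), int(f"3{nn}17")], "probe": int(f"625{nn}")}

print(hana_ports(50))  # {'ports': [35015, 35017], 'probe': 62550}
```

The helper makes explicit why the template requires HANA instance number 50: that is the only instance number for which the derived ports match the load balancer configuration.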
Application servers template
The application servers template deploys two or more virtual machines that can be used as SAP Application Server
instances for one SAP system. For example, if you deploy an ASCS/SCS template for five SAP systems, you need to
deploy this template five times.
To set up the application servers multi-SID template, in the application servers multi-SID template, enter values for
the following parameters:
Sap System Id. Enter the SAP system ID of the SAP system you want to install. The ID will be used as a prefix
for the resources that are deployed.
Os Type. Select the operating system of the virtual machines.
Sap System Size. The number of SAPS the new system will provide. If you are not sure how many SAPS the
system will require, ask your SAP Technology Partner or System Integrator.
System Availability. Select HA.
Admin Username and Admin Password. Create a new user that can be used to sign in to the machine.
Subnet Id. Enter the ID of the subnet that you used during the deployment of the ASCS/SCS template, or the ID
of the subnet that was created as part of the ASCS/SCS template deployment.
Azure virtual network
In our example, the address space of the Azure virtual network is 10.0.0.0/16. There is one subnet called Subnet,
with an address range of 10.0.0.0/24. All virtual machines and internal load balancers are deployed in this virtual
network.

IMPORTANT
Don't make any changes to the network settings inside the guest operating system. This includes IP addresses, DNS servers,
and subnet. Configure all your network settings in Azure. The Dynamic Host Configuration Protocol (DHCP) service
propagates your settings.

DNS IP addresses
To set the required DNS IP addresses, do the following steps.
1. In the Azure portal, on the DNS servers blade, make sure that your virtual network DNS servers option is set
to Custom DNS.
2. Select your settings based on the type of network you have. For more information, see the following
resources:
Corporate network connectivity (cross-premises): Add the IP addresses of the on-premises DNS servers.
You can extend on-premises DNS servers to the virtual machines that are running in Azure. In that
scenario, you can add the IP addresses of the Azure virtual machines on which you run the DNS service.
Cloud-only deployment: Deploy an additional virtual machine in the same Virtual Network instance that
serves as a DNS server. Add the IP addresses of the Azure virtual machines that you've set up to run DNS
service.

Figure 12: Configure DNS servers for Azure Virtual Network

NOTE
If you change the IP addresses of the DNS servers, you need to restart the Azure virtual machines to apply the
change and propagate the new DNS servers.

In our example, the DNS service is installed and configured on these Windows virtual machines:
| Virtual machine role | Virtual machine host name | Network card name | Static IP address |
| --- | --- | --- | --- |
| First DNS server | domcontr-0 | pr1-nic-domcontr-0 | 10.0.0.10 |
| Second DNS server | domcontr-1 | pr1-nic-domcontr-1 | 10.0.0.11 |

Host names and static IP addresses for the SAP ASCS/SCS clustered instance and DBMS clustered instance
For on-premises deployment, you need these reserved host names and IP addresses:

| Virtual host name role | Virtual host name | Virtual static IP address |
| --- | --- | --- |
| SAP ASCS/SCS first cluster virtual host name (for cluster management) | pr1-ascs-vir | 10.0.0.42 |
| SAP ASCS/SCS instance virtual host name | pr1-ascs-sap | 10.0.0.43 |
| SAP DBMS second cluster virtual host name (cluster management) | pr1-dbms-vir | 10.0.0.32 |

When you create the cluster, create the virtual host names pr1-ascs-vir and pr1-dbms-vir and the associated IP
addresses that manage the cluster itself. For information about how to do this, see Collect cluster nodes in a cluster
configuration.
You can manually create the other two virtual host names, pr1-ascs-sap and pr1-dbms-sap, and the associated IP
addresses, on the DNS server. The clustered SAP ASCS/SCS instance and the clustered DBMS instance use these
resources. For information about how to do this, see Create a virtual host name for a clustered SAP ASCS/SCS
instance.
Set static IP addresses for the SAP virtual machines
After you deploy the virtual machines to use in your cluster, you need to set static IP addresses for all virtual
machines. Do this in the Azure Virtual Network configuration, and not in the guest operating system.
1. In the Azure portal, select Resource Group > Network Card > Settings > IP Address.
2. On the IP addresses blade, under Assignment, select Static. In the IP address box, enter the IP address
that you want to use.

NOTE
If you change the IP address of the network card, you need to restart the Azure virtual machines to apply the
change.
Figure 13: Set static IP addresses for the network card of each virtual machine
Repeat this step for all network interfaces, that is, for all virtual machines, including virtual machines that
you want to use for your Active Directory/DNS service.
In our example, we have these virtual machines and static IP addresses:

| Virtual machine role | Virtual machine host name | Network card name | Static IP address |
| --- | --- | --- | --- |
| First SAP Application Server instance | pr1-di-0 | pr1-nic-di-0 | 10.0.0.50 |
| Second SAP Application Server instance | pr1-di-1 | pr1-nic-di-1 | 10.0.0.51 |
| ... | ... | ... | ... |
| Last SAP Application Server instance | pr1-di-5 | pr1-nic-di-5 | 10.0.0.55 |
| First cluster node for ASCS/SCS instance | pr1-ascs-0 | pr1-nic-ascs-0 | 10.0.0.40 |
| Second cluster node for ASCS/SCS instance | pr1-ascs-1 | pr1-nic-ascs-1 | 10.0.0.41 |
| First cluster node for DBMS instance | pr1-db-0 | pr1-nic-db-0 | 10.0.0.30 |
| Second cluster node for DBMS instance | pr1-db-1 | pr1-nic-db-1 | 10.0.0.31 |

Set a static IP address for the Azure internal load balancer


The SAP Azure Resource Manager template creates an Azure internal load balancer that is used for the SAP
ASCS/SCS instance cluster and the DBMS cluster.
IMPORTANT
The IP address of the virtual host name of the SAP ASCS/SCS instance is the same as the IP address of the SAP ASCS/SCS
internal load balancer, pr1-lb-ascs. The IP address of the virtual host name of the DBMS is the same as the IP address of
the DBMS internal load balancer, pr1-lb-dbms.

To set a static IP address for the Azure internal load balancer:


1. The initial deployment sets the internal load balancer IP address to Dynamic. In the Azure portal, on the IP
addresses blade, under Assignment, select Static.
2. Set the IP address of the internal load balancer pr1-lb-ascs to the IP address of the virtual host name of the
SAP ASCS/SCS instance.
3. Set the IP address of the internal load balancer pr1-lb-dbms to the IP address of the virtual host name of
the DBMS instance.

Figure 14: Set static IP addresses for the internal load balancer for the SAP ASCS/SCS instance
In our example, we have two Azure internal load balancers that have these static IP addresses:

| Azure internal load balancer role | Azure internal load balancer name | Static IP address |
| --- | --- | --- |
| SAP ASCS/SCS instance internal load balancer | pr1-lb-ascs | 10.0.0.43 |
| SAP DBMS internal load balancer | pr1-lb-dbms | 10.0.0.33 |

Default ASCS/SCS load balancing rules for the Azure internal load balancer
The SAP Azure Resource Manager template creates the ports you need:
An ABAP ASCS instance, with the default instance number 00
A Java SCS instance, with the default instance number 01
When you install your SAP ASCS/SCS instance, you must use the default instance number 00 for your ABAP ASCS
instance and the default instance number 01 for your Java SCS instance.
Next, create required internal load balancing endpoints for the SAP NetWeaver ports.
To create required internal load balancing endpoints, first, create these load balancing endpoints for the SAP
NetWeaver ABAP ASCS ports:
| Service / load balancing rule name | Default port number | Concrete port for ASCS instance with instance number 00 (ERS with 10) |
| --- | --- | --- |
| Enqueue Server / lbrule3200 | 32<InstanceNumber> | 3200 |
| ABAP Message Server / lbrule3600 | 36<InstanceNumber> | 3600 |
| Internal ABAP Message / lbrule3900 | 39<InstanceNumber> | 3900 |
| Message Server HTTP / Lbrule8100 | 81<InstanceNumber> | 8100 |
| SAP Start Service ASCS HTTP / Lbrule50013 | 5<InstanceNumber>13 | 50013 |
| SAP Start Service ASCS HTTPS / Lbrule50014 | 5<InstanceNumber>14 | 50014 |
| Enqueue Replication / Lbrule50016 | 5<InstanceNumber>16 | 50016 |
| SAP Start Service ERS HTTP / Lbrule51013 | 5<InstanceNumber>13 | 51013 |
| SAP Start Service ERS HTTPS / Lbrule51014 | 5<InstanceNumber>14 | 51014 |
| Win RM / Lbrule5985 | | 5985 |
| File Share / Lbrule445 | | 445 |

Table 1: Port numbers of the SAP NetWeaver ABAP ASCS instances
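The "default port number" column of Table 1 maps to the concrete-port column by substituting the two-digit instance number into the pattern. A sketch in Python (the rule list and the `concrete_port` helper are illustrative, the patterns and instance numbers are the ones from Table 1):

```python
# Reproduce the pattern -> concrete-port mapping of Table 1 by substituting
# the instance number into each documented pattern.

ABAP_ASCS_RULES = [
    ("lbrule3200", "32<nn>"),    # Enqueue Server
    ("lbrule3600", "36<nn>"),    # ABAP Message Server
    ("lbrule3900", "39<nn>"),    # Internal ABAP Message
    ("Lbrule8100", "81<nn>"),    # Message Server HTTP
    ("Lbrule50013", "5<nn>13"),  # SAP Start Service ASCS HTTP  (ASCS instance)
    ("Lbrule50014", "5<nn>14"),  # SAP Start Service ASCS HTTPS (ASCS instance)
    ("Lbrule50016", "5<nn>16"),  # Enqueue Replication          (ASCS instance)
    ("Lbrule51013", "5<nn>13"),  # SAP Start Service ERS HTTP   (ERS instance)
    ("Lbrule51014", "5<nn>14"),  # SAP Start Service ERS HTTPS  (ERS instance)
]

def concrete_port(pattern, instance_number):
    return int(pattern.replace("<nn>", f"{instance_number:02d}"))

# ASCS instance number 00, ERS instance number 10 (the defaults in Table 1):
print(concrete_port("32<nn>", 0))    # 3200
print(concrete_port("5<nn>13", 10))  # 51013
```

The same substitution with instance numbers 01 and 11 yields the Java SCS values in Table 2.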


Then, create these load balancing endpoints for the SAP NetWeaver Java SCS ports:

| Service / load balancing rule name | Default port number | Concrete port for SCS instance with instance number 01 (ERS with 11) |
| --- | --- | --- |
| Enqueue Server / lbrule3201 | 32<InstanceNumber> | 3201 |
| Gateway Server / lbrule3301 | 33<InstanceNumber> | 3301 |
| Java Message Server / lbrule3901 | 39<InstanceNumber> | 3901 |
| Message Server HTTP / Lbrule8101 | 81<InstanceNumber> | 8101 |
| SAP Start Service SCS HTTP / Lbrule50113 | 5<InstanceNumber>13 | 50113 |
| SAP Start Service SCS HTTPS / Lbrule50114 | 5<InstanceNumber>14 | 50114 |
| Enqueue Replication / Lbrule50116 | 5<InstanceNumber>16 | 50116 |
| SAP Start Service ERS HTTP / Lbrule51113 | 5<InstanceNumber>13 | 51113 |
| SAP Start Service ERS HTTPS / Lbrule51114 | 5<InstanceNumber>14 | 51114 |
| Win RM / Lbrule5985 | | 5985 |
| File Share / Lbrule445 | | 445 |

Table 2: Port numbers of the SAP NetWeaver Java SCS instances

Figure 15: Default ASCS/SCS load balancing rules for the Azure internal load balancer
Change the ASCS/SCS default load balancing rules for the Azure internal load balancer
If you want to use different numbers for the SAP ASCS or SCS instances, you must change the names and values of
their ports from default values.
1. In the Azure portal, select <SID>-lb-ascs load balancer > Load Balancing Rules.
2. For all load balancing rules that belong to the SAP ASCS or SCS instance, change these values:
Name
Port
Back-end port
For example, if you want to change the default ASCS instance number from 00 to 31, you need to make the
changes for all ports listed in Table 1.
Here's an example of an update for port lbrule3200.

Figure 16: Change the ASCS/SCS default load balancing rules for the Azure internal load balancer
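Recomputing the ports for a non-default instance number follows directly from the patterns in Table 1. The following sketch shows the result of moving the ABAP ASCS instance from 00 to 31 (the `ports_for_instance` helper is illustrative):

```python
# Recompute all ASCS load-balancing rule ports for a new instance number,
# using the default port patterns from Table 1.

def ports_for_instance(instance_number):
    nn = f"{instance_number:02d}"
    patterns = ["32{}", "36{}", "39{}", "81{}", "5{}13", "5{}14", "5{}16"]
    return [int(p.format(nn)) for p in patterns]

print(ports_for_instance(0))   # [3200, 3600, 3900, 8100, 50013, 50014, 50016]
print(ports_for_instance(31))  # [3231, 3631, 3931, 8131, 53113, 53114, 53116]
```

Each value in the second list is the new front-end and back-end port for the corresponding load balancing rule.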
Add Windows virtual machines to the domain
After you assign a static IP address to the virtual machines, add the virtual machines to the domain.

Figure 17: Add a virtual machine to a domain


Add registry entries on both cluster nodes of the SAP ASCS/SCS instance
The Azure internal load balancer closes connections when they are idle for a set period of time (an idle timeout).
SAP work processes in dialog instances open connections to the SAP enqueue process as soon as the first
enqueue/dequeue request needs to be sent. These connections usually remain established until the work process
or the enqueue process restarts. However, if a connection is idle for a set period of time, the Azure internal load
balancer closes it. This isn't a problem, because the SAP work process re-establishes the connection to the
enqueue process if it no longer exists. These activities are documented in the developer traces of SAP processes,
but they create a large amount of extra content in those traces. It's a good idea to change the TCP/IP
KeepAliveTime and KeepAliveInterval values on both cluster nodes. Combine these changes in the TCP/IP
parameters with the SAP profile parameters described later in the article.
To add registry entries on both cluster nodes of the SAP ASCS/SCS instance, first, add these Windows registry
entries on both Windows cluster nodes for SAP ASCS/SCS:

| Setting | Value |
| --- | --- |
| Path | HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters |
| Variable name | KeepAliveTime |
| Variable type | REG_DWORD (Decimal) |
| Value | 120000 |
| Link to documentation | https://technet.microsoft.com/en-us/library/cc957549.aspx |

Table 3: Change the first TCP/IP parameter


Then, add this Windows registry entry on both Windows cluster nodes for SAP ASCS/SCS:

| Setting | Value |
| --- | --- |
| Path | HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters |
| Variable name | KeepAliveInterval |
| Variable type | REG_DWORD (Decimal) |
| Value | 120000 |
| Link to documentation | https://technet.microsoft.com/en-us/library/cc957548.aspx |

Table 4: Change the second TCP/IP parameter


To apply the changes, restart both cluster nodes.
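The effect of these values can be sanity-checked against the load balancer's idle timeout. A sketch in Python (the 4-minute figure is an assumption about the Azure internal load balancer's default idle timeout, and the helper name is illustrative):

```python
# Check that TCP keep-alive probes fire before the load balancer's idle
# timeout closes an idle connection. The registry values are in milliseconds.

KEEP_ALIVE_TIME_MS = 120_000      # KeepAliveTime from Table 3
KEEP_ALIVE_INTERVAL_MS = 120_000  # KeepAliveInterval from Table 4
ILB_IDLE_TIMEOUT_MIN = 4          # assumed Azure ILB default idle timeout

def first_probe_before_timeout(keep_alive_ms, idle_timeout_min):
    """True if the first keep-alive probe is sent before the idle timeout."""
    return keep_alive_ms / 60_000 < idle_timeout_min

print(first_probe_before_timeout(KEEP_ALIVE_TIME_MS, ILB_IDLE_TIMEOUT_MIN))  # True
```

With KeepAliveTime set to 120000 ms, a probe goes out after two minutes of inactivity, well inside the assumed four-minute window, so the enqueue connections stay open.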
Set up a Windows Server Failover Clustering cluster for an SAP ASCS/SCS instance
Setting up a Windows Server Failover Clustering cluster for an SAP ASCS/SCS instance involves these tasks:
Collecting the cluster nodes in a cluster configuration
Configuring a cluster file share witness
Collect the cluster nodes in a cluster configuration
1. In the Add Role and Features Wizard, add failover clustering to both cluster nodes.
2. Set up the failover cluster by using Failover Cluster Manager. In Failover Cluster Manager, select Create
Cluster, and then add only the name of the first cluster node, node A. Do not add the second node yet; you'll
add the second node in a later step.
Figure 18: Add the server or virtual machine name of the first cluster node
3. Enter the network name (virtual host name) of the cluster.

Figure 19: Enter the cluster name


4. After you've created the cluster, run a cluster validation test.

Figure 20: Run the cluster validation check


You can ignore any warnings about disks at this point in the process. You'll add a file share witness and the
SIOS shared disks later. At this stage, you don't need to worry about having a quorum.
Figure 21: No quorum disk is found

Figure 22: Core cluster resource needs a new IP address


5. Change the IP address of the core cluster service. The cluster can't start until you change the IP address of
the core cluster service, because the IP address of the server points to one of the virtual machine nodes. Do
this on the Properties page of the core cluster service's IP resource.
For example, we need to assign an IP address (in our example, 10.0.0.42) for the cluster virtual host name
pr1-ascs-vir.

Figure 23: In the Properties dialog box, change the IP address


Figure 24: Assign the IP address that is reserved for the cluster
6. Bring the cluster virtual host name online.

Figure 25: Cluster core service is up and running, and with the correct IP address
7. Add the second cluster node.
Now that the core cluster service is up and running, you can add the second cluster node.

Figure 26: Add the second cluster node


8. Enter a name for the second cluster node host.
Figure 27: Enter the second cluster node host name

IMPORTANT
Be sure that the Add all eligible storage to the cluster check box is NOT selected.

Figure 28: Do not select the check box


You can ignore warnings about quorum and disks. You'll set the quorum and share the disk later, as
described in Installing SIOS DataKeeper Cluster Edition for SAP ASCS/SCS cluster share disk.
Figure 29: Ignore warnings about the disk quorum
Configure a cluster file share witness
Configuring a cluster file share witness involves these tasks:
Creating a file share
Setting the file share witness quorum in Failover Cluster Manager
Create a file share

1. Select a file share witness instead of a quorum disk. SIOS DataKeeper supports this option.
In the examples in this article, the file share witness is on the Active Directory/DNS server that is running in
Azure; it's called domcontr-0. If you've configured a VPN connection to Azure (via Site-to-Site VPN or Azure
ExpressRoute) and your Active Directory/DNS service runs on-premises, that on-premises server isn't suitable to
run a file share witness.

NOTE
If your Active Directory/DNS service runs only on-premises, don't configure your file share witness on the Active
Directory/DNS Windows operating system that is running on-premises. Network latency between cluster nodes
running in Azure and Active Directory/DNS on-premises might be too large and cause connectivity issues. Be sure to
configure the file share witness on an Azure virtual machine that is running close to the cluster node.

The quorum drive needs at least 1,024 MB of free space. We recommend 2,048 MB of free space for the
quorum drive.
2. Add the cluster name object.
Figure 30: Assign the permissions on the share for the cluster name object
Be sure that the permissions include the authority to change data in the share for the cluster name object (in
our example, pr1-ascs-vir$).
3. To add the cluster name object to the list, select Add. Change the filter to check for computer objects, in
addition to those shown in Figure 31.

Figure 31: Change the Object Types to include computers

Figure 32: Select the Computers check box


4. Enter the cluster name object as shown in Figure 31. Because the record has already been created, you can
change the permissions, as shown in Figure 30.
5. Select the Security tab of the share, and then set more detailed permissions for the cluster name object.

Figure 33: Set the security attributes for the cluster name object on the file share quorum
Set the file share witness quorum in Failover Cluster Manager

1. Open the Configure Quorum Setting Wizard.

Figure 34: Start the Configure Cluster Quorum Setting Wizard


2. On the Select Quorum Configuration page, select Select the quorum witness.
Figure 35: Quorum configurations you can choose from
3. On the Select Quorum Witness page, select Configure a file share witness.

Figure 36: Select the file share witness


4. Enter the UNC path to the file share (in our example, \\domcontr-0\FSW). To see a list of the changes you can
make, select Next.

Figure 37: Define the file share location for the witness share
5. Select the changes you want, and then select Next. You need to successfully reconfigure the cluster
configuration as shown in Figure 38.

Figure 38: Confirmation that you've reconfigured the cluster


After you install the Windows failover cluster, you need to change some thresholds to adapt failover
detection to conditions in Azure. The parameters to change are documented in this blog post:
https://blogs.msdn.microsoft.com/clustering/2012/11/21/tuning-failover-cluster-network-thresholds/ . Assuming
that the two VMs that form the Windows cluster configuration for ASCS/SCS are in the same subnet, change the
following parameters to these values:
SameSubNetDelay = 2000
SameSubNetThreshold = 15
These settings were tested with customers and offer a good compromise: they are resilient enough to avoid
spurious failovers caused by transient network conditions, while still providing fast enough failover in real
error conditions, such as SAP software failures or node/VM failures.
Install SIOS DataKeeper Cluster Edition for the SAP ASCS/SCS cluster share disk
You now have a working Windows Server Failover Clustering configuration in Azure. But, to install an SAP
ASCS/SCS instance, you need a shared disk resource. You cannot create the shared disk resources you need in
Azure. SIOS DataKeeper Cluster Edition is a third-party solution you can use to create shared disk resources.
Installing SIOS DataKeeper Cluster Edition for the SAP ASCS/SCS cluster share disk involves these tasks:
Adding the .NET Framework 3.5
Installing SIOS DataKeeper
Setting up SIOS DataKeeper
Add the .NET Framework 3.5
The Microsoft .NET Framework 3.5 isn't automatically activated or installed on Windows Server 2012 R2. Because
SIOS DataKeeper requires the .NET Framework to be on all nodes that you install DataKeeper on, you must install
the .NET Framework 3.5 on the guest operating system of all virtual machines in the cluster.
There are two ways to add the .NET Framework 3.5:
Use the Add Roles and Features Wizard in Windows as shown in Figure 39.
Figure 39: Install the .NET Framework 3.5 by using the Add Roles and Features Wizard

Figure 40: Installation progress bar when you install the .NET Framework 3.5 by using the Add Roles and
Features Wizard
Use the command-line tool dism.exe. For this type of installation, you need to access the SxS directory on
the Windows installation media. At an elevated command prompt, type:

Dism /online /enable-feature /featurename:NetFx3 /All /Source:installation_media_drive:\sources\sxs /LimitAccess

Install SIOS DataKeeper


Install SIOS DataKeeper Cluster Edition on each node in the cluster. To create virtual shared storage with SIOS
DataKeeper, create a synced mirror and then simulate cluster shared storage.
Before you install the SIOS software, create the domain user DataKeeperSvc.
NOTE
Add the DataKeeperSvc user to the Local Administrator group on both cluster nodes.

To install SIOS DataKeeper:


1. Install the SIOS software on both cluster nodes.

Figure 41: First page of the SIOS DataKeeper installation


2. In the dialog box shown in Figure 42, select Yes.

Figure 42: DataKeeper informs you that a service will be disabled


3. In the dialog box shown in Figure 43, we recommend that you select Domain or Server account.
Figure 43: User selection for SIOS DataKeeper
4. Enter the domain account user name and password that you created for SIOS DataKeeper.

Figure 44: Enter the domain user name and password for the SIOS DataKeeper installation
5. Install the license key for your SIOS DataKeeper instance as shown in Figure 45.
Figure 45: Enter your SIOS DataKeeper license key
6. When prompted, restart the virtual machine.
Set up SIOS DataKeeper
After you install SIOS DataKeeper on both nodes, you need to start the configuration. The goal of the configuration
is to have synchronous data replication between the additional VHDs attached to each of the virtual machines.
1. Start the DataKeeper Management and Configuration tool, and then select Connect Server. (In Figure 46,
this option is circled in red.)

Figure 46: SIOS DataKeeper Management and Configuration tool


2. Enter the name or TCP/IP address of the first node that the Management and Configuration tool should
connect to. Then, in a second step, do the same for the second node.
Figure 47: Insert the name or TCP/IP address of the first node the Management and Configuration tool
should connect to, and in a second step, the second node
3. Create the replication job between the two nodes.

Figure 48: Create a replication job


A wizard guides you through the process of creating a replication job.
4. Define the name, TCP/IP address, and disk volume of the source node.

Figure 49: Define the name of the replication job

Figure 50: Define the base data for the node, which should be the current source node
5. Define the name, TCP/IP address, and disk volume of the target node.

Figure 51: Define the base data for the node, which should be the current target node
6. Define the compression algorithms. In our example, we recommend that you compress the replication
stream. Especially in resynchronization situations, the compression of the replication stream dramatically
reduces resynchronization time. Note that compression uses the CPU and RAM resources of a virtual
machine. As the compression rate increases, so does the volume of CPU resources used. You also can adjust
this setting later.
7. Another setting you need to check is whether the replication occurs asynchronously or synchronously.
When you protect SAP ASCS/SCS configurations, you must use synchronous replication.

Figure 52: Define replication details


8. Define whether the volume that is replicated by the replication job should be represented to a Windows
Server Failover Clustering cluster configuration as a shared disk. For the SAP ASCS/SCS configuration,
select Yes so that the Windows cluster sees the replicated volume as a shared disk that it can use as a
cluster volume.
Figure 53: Select Yes to set the replicated volume as a cluster volume
After the volume is created, the DataKeeper Management and Configuration tool shows that the replication
job is active.

Figure 54: DataKeeper synchronous mirroring for the SAP ASCS/SCS share disk is active
Failover Cluster Manager now shows the disk as a DataKeeper disk, as shown in Figure 55.

Figure 55: Failover Cluster Manager shows the disk that DataKeeper replicated

Install the SAP NetWeaver system


We won't describe the DBMS setup here because setups vary depending on the DBMS system you use. However, we
assume that high-availability concerns with the DBMS are addressed with the functionalities that the different
DBMS vendors support for Azure, for example, Always On or database mirroring for SQL Server, and Oracle Data
Guard for Oracle databases. In the scenario we use in this article, we didn't add more protection to the DBMS.
There are no special considerations when different DBMS services interact with this kind of clustered SAP
ASCS/SCS configuration in Azure.
NOTE
The installation procedures of SAP NetWeaver ABAP systems, Java systems, and ABAP+Java systems are almost identical.
The most significant difference is that an SAP ABAP system has one ASCS instance. The SAP Java system has one SCS
instance. The SAP ABAP+Java system has one ASCS instance and one SCS instance running in the same Microsoft failover
cluster group. Any installation differences for each SAP NetWeaver installation stack are explicitly mentioned. You can assume
that all other parts are the same.

Install SAP with a high-availability ASCS/SCS instance

IMPORTANT
Be sure not to place your page file on DataKeeper mirrored volumes. DataKeeper does not support page files on its
mirrored volumes. You can leave your page file on the temporary drive D of an Azure virtual machine, which is the
default. If it's not already there, move the Windows page file to drive D of your Azure virtual machine.

Installing SAP with a high-availability ASCS/SCS instance involves these tasks:


Creating a virtual host name for the clustered SAP ASCS/SCS instance
Installing the SAP first cluster node
Modifying the SAP profile of the ASCS/SCS instance
Adding a probe port
Opening the Windows firewall probe port
Create a virtual host name for the clustered SAP ASCS/SCS instance
1. In the Windows DNS manager, create a DNS entry for the virtual host name of the ASCS/SCS instance.

IMPORTANT
The IP address that you assign to the virtual host name of the ASCS/SCS instance must be the same as the IP
address that you assigned to Azure Load Balancer (<SID>-lb-ascs).

The IP address of the virtual SAP ASCS/SCS host name (pr1-ascs-sap) is the same as the IP address of
Azure Load Balancer (pr1-lb-ascs).

Figure 56: Define the DNS entry for the SAP ASCS/SCS cluster virtual name and TCP/IP address
2. To define the IP address assigned to the virtual host name, select DNS Manager > Domain.
Figure 57: New virtual name and TCP/IP address for SAP ASCS/SCS cluster configuration
Install the SAP first cluster node
1. Execute the first cluster node option on cluster node A. For example, on the pr1-ascs-0 host.
2. To keep the default ports for the Azure internal load balancer, select:
ABAP system: ASCS instance number 00
Java system: SCS instance number 01
ABAP+Java system: ASCS instance number 00 and SCS instance number 01
To use instance numbers other than 00 for the ABAP ASCS instance and 01 for the Java SCS instance, first
you need to change the Azure internal load balancer default load balancing rules, described in Change the
ASCS/SCS default load balancing rules for the Azure internal load balancer.
The next few tasks aren't described in the standard SAP installation documentation.

NOTE
The SAP installation documentation describes how to install the first ASCS/SCS cluster node.

Modify the SAP profile of the ASCS/SCS instance


You need to add a new profile parameter. The profile parameter prevents connections between SAP work
processes and the enqueue server from closing when they are idle for too long. We mentioned the problem
scenario in Add registry entries on both cluster nodes of the SAP ASCS/SCS instance. In that section, we also
introduced two changes to some basic TCP/IP connection parameters. In a second step, you need to set the
enqueue server to send a keep_alive signal so that the connections don't hit the Azure internal load balancer's
idle threshold.
To modify the SAP profile of the ASCS/SCS instance:
1. Add this profile parameter to the SAP ASCS/SCS instance profile:

enque/encni/set_so_keepalive = true

In our example, the path is:


<ShareDisk>:\usr\sap\PR1\SYS\profile\PR1_ASCS00_pr1-ascs-sap

For example, to the SAP SCS instance profile and corresponding path:
<ShareDisk>:\usr\sap\PR1\SYS\profile\PR1_SCS01_pr1-ascs-sap

2. To apply the changes, restart the SAP ASCS/SCS instance.


Add a probe port
Use the internal load balancer's probe functionality to make the entire cluster configuration work with Azure Load
Balancer. The Azure internal load balancer usually distributes the incoming workload equally between participating
virtual machines. However, this won't work in some cluster configurations because only one instance is active; the
other instance is passive and can't accept any of the workload. The probe functionality enables the Azure internal
load balancer to detect which instance is active, and to target only the active instance with the workload.
To add a probe port:
1. Check the current ProbePort setting by running the following PowerShell command. Execute it from within
one of the virtual machines in the cluster configuration.

$SAPSID = "PR1" # SAP <SID>

$SAPNetworkIPClusterName = "SAP $SAPSID IP"


Get-ClusterResource $SAPNetworkIPClusterName | Get-ClusterParameter

2. Define a probe port. The default probe port number is 0. In our example, we use probe port 62000.

Figure 58: The default cluster configuration probe port is 0


The port number is defined in SAP Azure Resource Manager templates. You can assign the port number in
PowerShell.
To set a new ProbePort value for the SAP <SID> IP cluster resource, run the following PowerShell script.
Update the PowerShell variables for your environment. After the script runs, you'll be prompted to restart
the SAP cluster group to activate the changes.
$SAPSID = "PR1"      # SAP <SID>
$ProbePort = 62000   # ProbePort of the Azure Internal Load Balancer

Clear-Host
$SAPClusterRoleName = "SAP $SAPSID"
$SAPIPresourceName = "SAP $SAPSID IP"
$SAPIPResourceClusterParameters = Get-ClusterResource $SAPIPresourceName | Get-ClusterParameter
$IPAddress = ($SAPIPResourceClusterParameters | Where-Object {$_.Name -eq "Address" }).Value
$NetworkName = ($SAPIPResourceClusterParameters | Where-Object {$_.Name -eq "Network" }).Value
$SubnetMask = ($SAPIPResourceClusterParameters | Where-Object {$_.Name -eq "SubnetMask" }).Value
$OverrideAddressMatch = ($SAPIPResourceClusterParameters | Where-Object {$_.Name -eq "OverrideAddressMatch" }).Value
$EnableDhcp = ($SAPIPResourceClusterParameters | Where-Object {$_.Name -eq "EnableDhcp" }).Value
$OldProbePort = ($SAPIPResourceClusterParameters | Where-Object {$_.Name -eq "ProbePort" }).Value

$var = Get-ClusterResource | Where-Object { $_.Name -eq $SAPIPresourceName }

Write-Host "Current configuration parameters for SAP IP cluster resource '$SAPIPresourceName' are:" -ForegroundColor Cyan
Get-ClusterResource -Name $SAPIPresourceName | Get-ClusterParameter

Write-Host
Write-Host "Current probe port property of the SAP cluster resource '$SAPIPresourceName' is '$OldProbePort'." -ForegroundColor Cyan
Write-Host
Write-Host "Setting the new probe port property of the SAP cluster resource '$SAPIPresourceName' to '$ProbePort' ..." -ForegroundColor Cyan
Write-Host

$var | Set-ClusterParameter -Multiple @{"Address"=$IPAddress;"ProbePort"=$ProbePort;"Subnetmask"=$SubnetMask;"Network"=$NetworkName;"OverrideAddressMatch"=$OverrideAddressMatch;"EnableDhcp"=$EnableDhcp}

Write-Host

$ActivateChanges = Read-Host "Do you want to restart the SAP cluster role '$SAPClusterRoleName' to activate the changes (yes/no)?"

if($ActivateChanges -eq "yes"){
    Write-Host
    Write-Host "Activating changes..." -ForegroundColor Cyan

    Write-Host
    Write-Host "Taking SAP cluster IP resource '$SAPIPresourceName' offline ..." -ForegroundColor Cyan
    Stop-ClusterResource -Name $SAPIPresourceName
    sleep 5

    Write-Host "Starting SAP cluster role '$SAPClusterRoleName' ..." -ForegroundColor Cyan
    Start-ClusterGroup -Name $SAPClusterRoleName

    Write-Host "New ProbePort parameter is active." -ForegroundColor Green
    Write-Host

    Write-Host "New configuration parameters for SAP IP cluster resource '$SAPIPresourceName':" -ForegroundColor Cyan
    Write-Host
    Get-ClusterResource -Name $SAPIPresourceName | Get-ClusterParameter
}else{
    Write-Host "Changes are not activated."
}

After you bring the SAP <SID> cluster role online, verify that ProbePort is set to the new value.
$SAPSID = "PR1" # SAP <SID>

$SAPNetworkIPClusterName = "SAP $SAPSID IP"


Get-ClusterResource $SAPNetworkIPClusterName | Get-ClusterParameter

Figure 59: Probe the cluster port after you set the new value
Open the Windows firewall probe port
You need to open a Windows firewall probe port on both cluster nodes. Use the following script to open a
Windows firewall probe port. Update the PowerShell variables for your environment.

$ProbePort = 62000 # ProbePort of the Azure Internal Load Balancer

New-NetFirewallRule -Name AzureProbePort -DisplayName "Rule for Azure Probe Port" -Direction Inbound -Action Allow -Protocol TCP -LocalPort $ProbePort

The ProbePort is set to 62000. Now you can access the file share \\ascsha-clsap\sapmnt from other hosts, such
as from ascsha-dbas.
Install the database instance
To install the database instance, follow the process described in the SAP installation documentation.
Install the second cluster node
To install the second cluster node, follow the steps in the SAP installation guide.
Change the start type of the SAP ERS Windows service instance
Change the start type of the SAP ERS Windows service to Automatic (Delayed Start) on both cluster nodes.
Figure 60: Change the service type for the SAP ERS instance to delayed automatic
Install the SAP Primary Application Server
Install the Primary Application Server (PAS) instance <SID>-di-0 on the virtual machine that you've designated to
host the PAS. There are no dependencies on Azure or DataKeeper-specific settings.
Install the SAP Additional Application Server
Install an SAP Additional Application Server (AAS) on all the virtual machines that you've designated to host an
SAP Application Server instance. For example, on <SID>-di-1 to <SID>-di-<n>.

NOTE
This finishes the installation of a high-availability SAP NetWeaver system. Next, proceed with failover testing.

Test the SAP ASCS/SCS instance failover and SIOS replication


It's easy to test and monitor an SAP ASCS/SCS instance failover and SIOS disk replication by using Failover Cluster
Manager and the SIOS DataKeeper Management and Configuration tool.
SAP ASCS/SCS instance is running on cluster node A
The SAP PR1 cluster group is running on cluster node A, for example, on pr1-ascs-0. The shared disk drive S,
which is part of the SAP PR1 cluster group and which the ASCS/SCS instance uses, is assigned to cluster node A.
Figure 61: Failover Cluster Manager: The SAP <SID> cluster group is running on cluster node A
In the SIOS DataKeeper Management and Configuration tool, you can see that the shared disk data is
synchronously replicated from the source volume drive S on cluster node A to the target volume drive S on cluster
node B. For example, it's replicated from pr1-ascs-0 [10.0.0.40] to pr1-ascs-1 [10.0.0.41].

Figure 62: In SIOS DataKeeper, replicate the local volume from cluster node A to cluster node B
Failover from node A to node B
1. Choose one of these options to initiate a failover of the SAP <SID> cluster group from cluster node A to
cluster node B:
Use Failover Cluster Manager
Use Failover Cluster PowerShell
$SAPSID = "PR1" # SAP <SID>

$SAPClusterGroup = "SAP $SAPSID"


Move-ClusterGroup -Name $SAPClusterGroup

2. Restart cluster node A within the Windows guest operating system (this initiates an automatic failover of the
SAP <SID> cluster group from node A to node B).
3. Restart cluster node A from the Azure portal (this initiates an automatic failover of the SAP <SID> cluster group
from node A to node B).
4. Restart cluster node A by using Azure PowerShell (this initiates an automatic failover of the SAP <SID>
cluster group from node A to node B).
After failover, the SAP <SID> cluster group is running on cluster node B. For example, it's running on pr1-
ascs-1.

Figure 63: In Failover Cluster Manager, the SAP <SID> cluster group is running on cluster node B
The shared disk is now mounted on cluster node B. SIOS DataKeeper is replicating data from source volume
drive S on cluster node B to target volume drive S on cluster node A. For example, it's replicating from pr1-
ascs-1 [10.0.0.41] to pr1-ascs-0 [10.0.0.40].
Figure 64: SIOS DataKeeper replicates the local volume from cluster node B to cluster node A
High availability for SAP NetWeaver on Azure VMs
on SUSE Linux Enterprise Server for SAP applications
7/31/2017 32 min to read

This article describes how to deploy and configure the virtual machines, install the cluster framework, and
install a highly available SAP NetWeaver 7.50 system. The example configurations and installation commands use
ASCS instance number 00, ERS instance number 02, and SAP system ID NWS. The names of the resources (for
example, virtual machines and virtual networks) in the examples assume that you have used the converged
template with SAP system ID NWS to create the resources.
Read the following SAP Notes and papers first
SAP Note 1928533, which has:
List of Azure VM sizes that are supported for the deployment of SAP software
Important capacity information for Azure VM sizes
Supported SAP software, and operating system (OS) and database combinations
Required SAP kernel version for Windows and Linux on Microsoft Azure
SAP Note 2015553 lists prerequisites for SAP-supported SAP software deployments in Azure.
SAP Note 2205917 has recommended OS settings for SUSE Linux Enterprise Server for SAP Applications
SAP Note 1944799 has SAP HANA Guidelines for SUSE Linux Enterprise Server for SAP Applications
SAP Note 2178632 has detailed information about all monitoring metrics reported for SAP in Azure.
SAP Note 2191498 has the required SAP Host Agent version for Linux in Azure.
SAP Note 2243692 has information about SAP licensing on Linux in Azure.
SAP Note 1984787 has general information about SUSE Linux Enterprise Server 12.
SAP Note 1999351 has additional troubleshooting information for the Azure Enhanced Monitoring Extension
for SAP.
SAP Community WIKI has all required SAP Notes for Linux.
Azure Virtual Machines planning and implementation for SAP on Linux
Azure Virtual Machines deployment for SAP on Linux (this article)
Azure Virtual Machines DBMS deployment for SAP on Linux
SAP HANA SR Performance Optimized Scenario
The guide contains all required information to set up SAP HANA System Replication on-premises. Use this guide
as a baseline.
Highly Available NFS Storage with DRBD and Pacemaker
The guide contains all required information to set up a highly available NFS server. Use this guide as a baseline.

Overview
To achieve high availability, SAP NetWeaver requires an NFS server. The NFS server is configured in a separate
cluster and can be used by multiple SAP systems.
The NFS server, SAP NetWeaver ASCS, SAP NetWeaver SCS, SAP NetWeaver ERS and the SAP HANA database use
virtual hostname and virtual IP addresses. On Azure, a load balancer is required to use a virtual IP address. The
following list shows the configuration of the load balancer.
NFS Server
Frontend configuration
IP address 10.0.0.4
Backend configuration
Connected to primary network interfaces of all virtual machines that should be part of the NFS cluster
Probe Port
Port 61000
Loadbalancing rules
2049 TCP
2049 UDP
(A)SCS
Frontend configuration
IP address 10.0.0.10
Backend configuration
Connected to primary network interfaces of all virtual machines that should be part of the (A)SCS/ERS
cluster
Probe Port
Port 620<nr>
Loadbalancing rules
32<nr> TCP
36<nr> TCP
39<nr> TCP
81<nr> TCP
5<nr>13 TCP
5<nr>14 TCP
5<nr>16 TCP
ERS
Frontend configuration
IP address 10.0.0.11
Backend configuration
Connected to primary network interfaces of all virtual machines that should be part of the (A)SCS/ERS
cluster
Probe Port
Port 621<nr>
Loadbalancing rules
33<nr> TCP
5<nr>13 TCP
5<nr>14 TCP
5<nr>16 TCP
SAP HANA
Frontend configuration
IP address 10.0.0.12
Backend configuration
Connected to primary network interfaces of all virtual machines that should be part of the HANA cluster
Probe Port
Port 625<nr>
Loadbalancing rules
3<nr>15 TCP
3<nr>17 TCP
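In the rules above, <nr> stands for the two-digit SAP instance number. As a quick sanity check, the placeholders expand as follows for the example instance numbers; a throwaway shell sketch, not part of any SAP tooling, and the HANA instance number 03 is an assumption chosen only for illustration:

```shell
#!/bin/sh
# Expand the <nr> placeholders from the load-balancing rules above.
nr=02                                        # ERS instance number from this article
echo "ERS probe port 621<nr>  -> 621${nr}"   # 62102
echo "ERS rule 33<nr>         -> 33${nr}"    # 3302
echo "ERS rule 5<nr>13        -> 5${nr}13"   # 50213

nr=03                                        # HANA instance number (illustrative assumption)
echo "HANA probe port 625<nr> -> 625${nr}"   # 62503
echo "HANA rule 3<nr>15       -> 3${nr}15"   # 30315
```

The same convention applies to the (A)SCS rules, with probe port 620<nr> and ASCS instance number 00.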

Setting up a highly available NFS server


Deploying Linux
The Azure Marketplace contains an image for SUSE Linux Enterprise Server for SAP Applications 12 that you can
use to deploy new virtual machines. You can use one of the quickstart templates on GitHub to deploy all required
resources. The template deploys the virtual machines, the load balancer, the availability set, and so on. Follow
these steps to deploy the template:
1. Open the SAP file server template in the Azure portal
2. Enter the following parameters
a. Resource Prefix
Enter the prefix you want to use. The value is used as a prefix for the resources that are deployed.
b. Os Type
Select one of the Linux distributions. For this example, select SLES 12
c. Admin Username and Admin Password
A new user is created that can be used to log on to the machine.
d. Subnet Id
The ID of the subnet to which the virtual machines should be connected. Leave empty if you want to
create a new virtual network, or select the subnet of your VPN or ExpressRoute virtual network to
connect the virtual machines to your on-premises network. The ID usually looks like
/subscriptions/<subscription id>/resourceGroups/<resource group
name>/providers/Microsoft.Network/virtualNetworks/<virtual network name>/subnets/<subnet
name>
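The subnet ID in step d follows the standard Azure Resource Manager resource-ID pattern. A small sketch that assembles it from its parts; all values are placeholders you must replace with your own:

```shell
#!/bin/sh
# Assemble an ARM subnet ID from its components (placeholder values).
subscription_id="<subscription id>"
resource_group="<resource group name>"
vnet_name="<virtual network name>"
subnet_name="<subnet name>"

subnet_id="/subscriptions/${subscription_id}/resourceGroups/${resource_group}/providers/Microsoft.Network/virtualNetworks/${vnet_name}/subnets/${subnet_name}"
echo "$subnet_id"
```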
Installation
The following items are prefixed with either [A] - applicable to all nodes, [1] - only applicable to node 1 or [2] -
only applicable to node 2.
1. [A] Update SLES

sudo zypper update

2. [1] Enable ssh access

sudo ssh-keygen -tdsa

# Enter file in which to save the key (/root/.ssh/id_dsa): -> ENTER


# Enter passphrase (empty for no passphrase): -> ENTER
# Enter same passphrase again: -> ENTER

# copy the public key


sudo cat /root/.ssh/id_dsa.pub

3. [2] Enable ssh access

sudo ssh-keygen -tdsa

# insert the public key you copied in the last step into the authorized keys file on the second server
sudo vi /root/.ssh/authorized_keys

# Enter file in which to save the key (/root/.ssh/id_dsa): -> ENTER


# Enter passphrase (empty for no passphrase): -> ENTER
# Enter same passphrase again: -> ENTER

# copy the public key


sudo cat /root/.ssh/id_dsa.pub

4. [1] Enable ssh access

# insert the public key you copied in the last step into the authorized keys file on the first server
sudo vi /root/.ssh/authorized_keys

5. [A] Install HA extension


sudo zypper install sle-ha-release fence-agents

6. [A] Setup host name resolution


You can either use a DNS server or modify the /etc/hosts on all nodes. This example shows how to use the
/etc/hosts file. Replace the IP address and the hostname in the following commands

sudo vi /etc/hosts

Insert the following lines to /etc/hosts. Change the IP address and hostname to match your environment

# IP address of the load balancer frontend configuration for NFS


10.0.0.4 nws-nfs

7. [1] Install Cluster

sudo ha-cluster-init

# Do you want to continue anyway? [y/N] -> y


# Network address to bind to (for example: 192.168.1.0) [10.79.227.0] -> ENTER
# Multicast address (for example: 239.x.x.x) [239.174.218.125] -> ENTER
# Multicast port [5405] -> ENTER
# Do you wish to use SBD? [y/N] -> N
# Do you wish to configure an administration IP? [y/N] -> N

8. [2] Add node to cluster

sudo ha-cluster-join

# WARNING: NTP is not configured to start at system boot.


# WARNING: No watchdog device found. If SBD is used, the cluster will be unable to start without a
watchdog.
# Do you want to continue anyway? [y/N] -> y
# IP address or hostname of existing node (for example: 192.168.1.1) [] -> IP address of node 1 for
example 10.0.0.10
# /root/.ssh/id_dsa already exists - overwrite? [y/N] N

9. [A] Change hacluster password to the same password

sudo passwd hacluster

10. [A] Configure corosync to use a different transport, and add the node list. The cluster will not work otherwise.

sudo vi /etc/corosync/corosync.conf

Add the following content to the file (the transport setting and the nodelist section are the new parts).


[...]
interface {
[...]
}
transport: udpu
}
nodelist {
node {
# IP address of prod-nfs-0
ring0_addr:10.0.0.5
}
node {
# IP address of prod-nfs-1
ring0_addr:10.0.0.6
}
}
logging {
[...]

Then restart the corosync service

sudo service corosync restart

11. [A] Install drbd components

sudo zypper install drbd drbd-kmp-default drbd-utils

12. [A] Create a partition for the drbd device

sudo sh -c 'echo -e "n\n\n\n\n\nw\n" | fdisk /dev/sdc'

13. [A] Create LVM configurations

sudo pvcreate /dev/sdc1


sudo vgcreate vg_NFS /dev/sdc1
sudo lvcreate -l 100%FREE -n NWS vg_NFS

14. [A] Create the NFS drbd device

sudo vi /etc/drbd.d/NWS_nfs.res

Insert the configuration for the new drbd device and exit
resource NWS_nfs {
protocol C;
disk {
on-io-error pass_on;
}
on prod-nfs-0 {
address 10.0.0.5:7790;
device /dev/drbd0;
disk /dev/vg_NFS/NWS;
meta-disk internal;
}
on prod-nfs-1 {
address 10.0.0.6:7790;
device /dev/drbd0;
disk /dev/vg_NFS/NWS;
meta-disk internal;
}
}

Create the drbd device and start it

sudo drbdadm create-md NWS_nfs


sudo drbdadm up NWS_nfs

15. [1] Skip initial synchronization

sudo drbdadm new-current-uuid --clear-bitmap NWS_nfs

16. [1] Set the primary node

sudo drbdadm primary --force NWS_nfs

17. [1] Wait until the new drbd devices are synchronized

sudo cat /proc/drbd

# version: 8.4.6 (api:1/proto:86-101)


# GIT-hash: 833d830e0152d1e457fa7856e71e11248ccf3f70 build by abuild@sheep14, 2016-05-09 23:14:56
# 0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----
# ns:0 nr:0 dw:0 dr:912 al:8 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0

18. [1] Create file systems on the drbd devices

sudo mkfs.xfs /dev/drbd0
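Before continuing, it's worth confirming that the initial synchronization from step 17 really finished. A small sketch that parses the /proc/drbd status format shown above; it works on a captured status line, so it can be tried anywhere, and on a live node you would feed it the output of cat /proc/drbd instead:

```shell
#!/bin/sh
# Check a drbd status line for full synchronization.
# The line below is the example output from step 17; on a live node use:
#   status="$(cat /proc/drbd)"
status='0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----'

case "$status" in
  *'ds:UpToDate/UpToDate'*) echo "synchronized" ;;
  *)                        echo "still syncing" ;;
esac
```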

Configure Cluster Framework


1. [1] Change the default settings
sudo crm configure

crm(live)configure# rsc_defaults resource-stickiness="1"

crm(live)configure# commit
crm(live)configure# exit

2. [1] Add the NFS drbd device to the cluster configuration

sudo crm configure

crm(live)configure# primitive drbd_NWS_nfs \


ocf:linbit:drbd \
params drbd_resource="NWS_nfs" \
op monitor interval="15" role="Master" \
op monitor interval="30" role="Slave"

crm(live)configure# ms ms-drbd_NWS_nfs drbd_NWS_nfs \


meta master-max="1" master-node-max="1" clone-max="2" \
clone-node-max="1" notify="true" interleave="true"

crm(live)configure# commit
crm(live)configure# exit

3. [1] Create the NFS server

sudo crm configure

crm(live)configure# primitive nfsserver \


systemd:nfs-server \
op monitor interval="30s"

crm(live)configure# clone cl-nfsserver nfsserver interleave="true"

crm(live)configure# commit
crm(live)configure# exit

4. [1] Create the NFS File System resources


sudo crm configure

crm(live)configure# primitive fs_NWS_sapmnt \


ocf:heartbeat:Filesystem \
params device=/dev/drbd0 \
directory=/srv/nfs/NWS \
fstype=xfs \
op monitor interval="10s"

crm(live)configure# group g-NWS_nfs fs_NWS_sapmnt

crm(live)configure# order o-NWS_drbd_before_nfs inf: \


ms-drbd_NWS_nfs:promote g-NWS_nfs:start

crm(live)configure# colocation col-NWS_nfs_on_drbd inf: \


g-NWS_nfs ms-drbd_NWS_nfs:Master

crm(live)configure# commit
crm(live)configure# exit

5. [1] Create the NFS exports

sudo mkdir /srv/nfs/NWS/sidsys


sudo mkdir /srv/nfs/NWS/sapmntsid
sudo mkdir /srv/nfs/NWS/trans

sudo crm configure

crm(live)configure# primitive exportfs_NWS \


ocf:heartbeat:exportfs \
params directory="/srv/nfs/NWS" \
options="rw,no_root_squash" \
clientspec="*" fsid=0 \
wait_for_leasetime_on_stop=true \
op monitor interval="30s"

crm(live)configure# modgroup g-NWS_nfs add exportfs_NWS

crm(live)configure# commit
crm(live)configure# exit

6. [1] Create a virtual IP resource and health-probe for the internal load balancer

sudo crm configure

crm(live)configure# primitive vip_NWS_nfs IPaddr2 \


params ip=10.0.0.4 cidr_netmask=24 \
op monitor interval=10 timeout=20

crm(live)configure# primitive nc_NWS_nfs anything \
  params binfile="/usr/bin/nc" cmdline_options="-l -k 61000" \
  op monitor timeout=20s interval=10 depth=0

crm(live)configure# modgroup g-NWS_nfs add nc_NWS_nfs
crm(live)configure# modgroup g-NWS_nfs add vip_NWS_nfs

crm(live)configure# commit
crm(live)configure# exit

Create STONITH device


The STONITH device uses a Service Principal to authenticate with Microsoft Azure. Follow these steps to create a
Service Principal.
1. Go to https://portal.azure.com
2. Open the Azure Active Directory blade
Go to Properties and write down the Directory Id. This is the tenant id.
3. Click App registrations
4. Click Add
5. Enter a Name, select Application Type "Web app/API", enter a sign-on URL (for example http://localhost) and
click Create
6. The sign-on URL is not used and can be any valid URL
7. Select the new App and click Keys in the Settings tab
8. Enter a description for a new key, select "Never expires" and click Save
9. Write down the Value. It is used as the password for the Service Principal
10. Write down the Application Id. It is used as the username (login id in the steps below) of the Service Principal
The Service Principal does not have permissions to access your Azure resources by default. You need to give the
Service Principal permissions to start and stop (deallocate) all virtual machines of the cluster.
1. Go to https://portal.azure.com
2. Open the All resources blade
3. Select the virtual machine
4. Click Access control (IAM)
5. Click Add
6. Select the role Owner
7. Enter the name of the application you created above
8. Click OK
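
The portal steps above can also be scripted. The sketch below only prints the equivalent Azure CLI commands so you can review them before running anything; the application name, resource group, and subscription ID are placeholders, and the exact az options may differ between CLI versions.

```shell
# Sketch: print (do not run) Azure CLI equivalents of the portal steps above.
# Every name and ID below is a placeholder - replace with your own values.
print_sp_commands() {
  local app="stonith-fencing-app"                   # placeholder name
  local rg="my-resource-group"                      # placeholder
  local sub="00000000-0000-0000-0000-000000000000"  # placeholder

  # Create the service principal; note down the appId (login) and password
  echo "az ad sp create-for-rbac --name ${app} --skip-assignment"

  # Grant the principal rights to start/stop (deallocate) each cluster VM
  for vm in nws-cl-0 nws-cl-1; do
    echo "az role assignment create --assignee <appId> --role Owner --scope /subscriptions/${sub}/resourceGroups/${rg}/providers/Microsoft.Compute/virtualMachines/${vm}"
  done
}

print_sp_commands
```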
[1] Create the STONITH devices
After you edited the permissions for the virtual machines, you can configure the STONITH devices in the cluster.

sudo crm configure

# replace the placeholders with your subscription id, resource group, tenant id, service principal id and password

crm(live)configure# primitive rsc_st_azure_1 stonith:fence_azure_arm \
  params subscriptionId="subscription id" resourceGroup="resource group" tenantId="tenant id" login="login id" passwd="password"

crm(live)configure# primitive rsc_st_azure_2 stonith:fence_azure_arm \
  params subscriptionId="subscription id" resourceGroup="resource group" tenantId="tenant id" login="login id" passwd="password"

crm(live)configure# colocation col_st_azure -2000: rsc_st_azure_1:Started rsc_st_azure_2:Started

crm(live)configure# commit
crm(live)configure# exit

[1] Enable the use of a STONITH device

sudo crm configure property stonith-enabled=true


Setting up (A)SCS
Deploying Linux
The Azure Marketplace contains an image for SUSE Linux Enterprise Server for SAP Applications 12 that you can
use to deploy new virtual machines. The marketplace image contains the resource agent for SAP NetWeaver.
You can use one of the quick start templates on github to deploy all required resources. The template deploys the
virtual machines, the load balancer, availability set etc. Follow these steps to deploy the template:
1. Open the ASCS/SCS Multi SID template or the converged template on the Azure portal. The ASCS/SCS template
only creates the load-balancing rules for the SAP NetWeaver ASCS/SCS and ERS (Linux only) instances, whereas
the converged template also creates the load-balancing rules for a database (for example, Microsoft SQL Server
or SAP HANA). If you plan to install an SAP NetWeaver based system and you also want to install the database
on the same machines, use the converged template.
2. Enter the following parameters
a. Resource Prefix (ASCS/SCS Multi SID template only)
Enter the prefix you want to use. The value is used as a prefix for the resources that are deployed.
b. Sap System Id (converged template only)
Enter the SAP system Id of the SAP system you want to install. The Id is used as a prefix for the resources
that are deployed.
c. Stack Type
Select the SAP NetWeaver stack type
d. Os Type
Select one of the Linux distributions. For this example, select SLES 12 BYOS
e. Db Type
Select HANA
f. Sap System Size
The amount of SAPS the new system provides. If you are not sure how many SAPS the system requires,
please ask your SAP Technology Partner or System Integrator
g. System Availability
Select HA
h. Admin Username and Admin Password
A new user is created that can be used to log on to the machine.
i. Subnet Id
The ID of the subnet to which the virtual machines should be connected. Leave empty if you want to
create a new virtual network or select the same subnet that you used or created as part of the NFS server
deployment. The ID usually looks like /subscriptions/<subscription id>/resourceGroups/<resource
group name>/providers/Microsoft.Network/virtualNetworks/<virtual network
name>/subnets/<subnet name>
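
The subnet ID described above can be assembled from its parts. A small helper (the values in the example call are illustrative, not from the original):

```shell
# Build an Azure subnet resource ID from its components.
subnet_id() {
  local sub="$1" rg="$2" vnet="$3" net="$4"
  echo "/subscriptions/${sub}/resourceGroups/${rg}/providers/Microsoft.Network/virtualNetworks/${vnet}/subnets/${net}"
}

# Example with placeholder values:
subnet_id "00000000-0000-0000-0000-000000000000" "nws-rg" "nws-vnet" "nws-subnet"
```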
Installation
The following items are prefixed with either [A] - applicable to all nodes, [1] - only applicable to node 1 or [2] -
only applicable to node 2.
1. [A] Update SLES

sudo zypper update

2. [1] Enable ssh access


sudo ssh-keygen -t dsa

# Enter file in which to save the key (/root/.ssh/id_dsa): -> ENTER
# Enter passphrase (empty for no passphrase): -> ENTER
# Enter same passphrase again: -> ENTER

# copy the public key
sudo cat /root/.ssh/id_dsa.pub

3. [2] Enable ssh access

sudo ssh-keygen -t dsa

# Enter file in which to save the key (/root/.ssh/id_dsa): -> ENTER
# Enter passphrase (empty for no passphrase): -> ENTER
# Enter same passphrase again: -> ENTER

# insert the public key you copied in the last step into the authorized keys file on the second server
sudo vi /root/.ssh/authorized_keys

# copy the public key
sudo cat /root/.ssh/id_dsa.pub

4. [1] Enable ssh access

# insert the public key you copied in the last step into the authorized keys file on the first server
sudo vi /root/.ssh/authorized_keys

5. [A] Install HA extension

sudo zypper install sle-ha-release fence-agents

6. [A] Update SAP resource agents


A patch for the resource-agents package is required to use the new configuration that is described in this
article. You can check whether the patch is already installed with the following command

sudo grep 'parameter name="IS_ERS"' /usr/lib/ocf/resource.d/heartbeat/SAPInstance

The output should be similar to

<parameter name="IS_ERS" unique="0" required="0">

If the grep command does not find the IS_ERS parameter, you need to install the patch listed on the SUSE
download page
# example for patch for SLES 12 SP1
sudo zypper in -t patch SUSE-SLE-HA-12-SP1-2017-885=1
# example for patch for SLES 12 SP2
sudo zypper in -t patch SUSE-SLE-HA-12-SP2-2017-886=1
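
The check and the patch hint can be combined into a small script (a sketch; the agent path defaults to the one used in this article and can be overridden to test against a copy of the file):

```shell
# Report whether a SAPInstance agent file already supports the IS_ERS parameter.
check_is_ers() {
  local agent="${1:-/usr/lib/ocf/resource.d/heartbeat/SAPInstance}"
  if grep -q 'parameter name="IS_ERS"' "$agent" 2>/dev/null; then
    echo "IS_ERS supported - no patch needed"
  else
    echo "IS_ERS missing - install the resource-agents patch from the SUSE download page"
  fi
}

check_is_ers   # on a cluster node, checks the installed agent
```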

7. [A] Setup host name resolution


You can either use a DNS server or modify the /etc/hosts on all nodes. This example shows how to use the
/etc/hosts file. Replace the IP address and the hostname in the following commands

sudo vi /etc/hosts

Insert the following lines to /etc/hosts. Change the IP address and hostname to match your environment

# IP address of the load balancer frontend configuration for NFS
10.0.0.4 nws-nfs
# IP address of the load balancer frontend configuration for SAP NetWeaver ASCS/SCS
10.0.0.10 nws-ascs
# IP address of the load balancer frontend configuration for SAP NetWeaver ERS
10.0.0.11 nws-ers
# IP address of the load balancer frontend configuration for database
10.0.0.12 nws-db
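
If you script the node preparation, the host entries can be generated instead of typed by hand. A sketch using the example names and addresses from this article:

```shell
# Print the /etc/hosts entries used in this example; adjust the addresses
# and names to your environment before appending them to /etc/hosts.
gen_hosts() {
  cat <<'EOF'
# IP address of the load balancer frontend configuration for NFS
10.0.0.4 nws-nfs
# IP address of the load balancer frontend configuration for SAP NetWeaver ASCS/SCS
10.0.0.10 nws-ascs
# IP address of the load balancer frontend configuration for SAP NetWeaver ERS
10.0.0.11 nws-ers
# IP address of the load balancer frontend configuration for database
10.0.0.12 nws-db
EOF
}

# Example usage on a node: gen_hosts | sudo tee -a /etc/hosts
```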

8. [1] Install Cluster

sudo ha-cluster-init

# Do you want to continue anyway? [y/N] -> y
# Network address to bind to (for example: 192.168.1.0) [10.79.227.0] -> ENTER
# Multicast address (for example: 239.x.x.x) [239.174.218.125] -> ENTER
# Multicast port [5405] -> ENTER
# Do you wish to use SBD? [y/N] -> N
# Do you wish to configure an administration IP? [y/N] -> N

9. [2] Add node to cluster

sudo ha-cluster-join

# WARNING: NTP is not configured to start at system boot.
# WARNING: No watchdog device found. If SBD is used, the cluster will be unable to start without a watchdog.
# Do you want to continue anyway? [y/N] -> y
# IP address or hostname of existing node (for example: 192.168.1.1) [] -> IP address of node 1, for example 10.0.0.10
# /root/.ssh/id_dsa already exists - overwrite? [y/N] N

10. [A] Change hacluster password to the same password

sudo passwd hacluster

11. [A] Configure corosync to use a different transport and add a nodelist. The cluster will not work otherwise.
sudo vi /etc/corosync/corosync.conf

Add the following content to the file (the transport setting and the nodelist section are the additions).

[...]
interface {
[...]
}
transport: udpu
}
nodelist {
node {
# IP address of nws-cl-0
ring0_addr: 10.0.0.14
}
node {
# IP address of nws-cl-1
ring0_addr: 10.0.0.13
}
}
logging {
[...]

Then restart the corosync service

sudo service corosync restart

12. [A] Install drbd components

sudo zypper install drbd drbd-kmp-default drbd-utils

13. [A] Create a partition for the drbd device

sudo sh -c 'echo -e "n\n\n\n\n\nw\n" | fdisk /dev/sdc'

14. [A] Create LVM configurations

sudo pvcreate /dev/sdc1
sudo vgcreate vg_NWS /dev/sdc1
sudo lvcreate -l 50%FREE -n NWS_ASCS vg_NWS
sudo lvcreate -l 50%FREE -n NWS_ERS vg_NWS

15. [A] Create the SCS drbd device

sudo vi /etc/drbd.d/NWS_ascs.res

Insert the configuration for the new drbd device and exit
resource NWS_ascs {
protocol C;
disk {
on-io-error pass_on;
}
on nws-cl-0 {
address 10.0.0.14:7791;
device /dev/drbd0;
disk /dev/vg_NWS/NWS_ASCS;
meta-disk internal;
}
on nws-cl-1 {
address 10.0.0.13:7791;
device /dev/drbd0;
disk /dev/vg_NWS/NWS_ASCS;
meta-disk internal;
}
}

Create the drbd device and start it

sudo drbdadm create-md NWS_ascs
sudo drbdadm up NWS_ascs

16. [A] Create the ERS drbd device

sudo vi /etc/drbd.d/NWS_ers.res

Insert the configuration for the new drbd device and exit

resource NWS_ers {
protocol C;
disk {
on-io-error pass_on;
}
on nws-cl-0 {
address 10.0.0.14:7792;
device /dev/drbd1;
disk /dev/vg_NWS/NWS_ERS;
meta-disk internal;
}
on nws-cl-1 {
address 10.0.0.13:7792;
device /dev/drbd1;
disk /dev/vg_NWS/NWS_ERS;
meta-disk internal;
}
}

Create the drbd device and start it

sudo drbdadm create-md NWS_ers
sudo drbdadm up NWS_ers

17. [1] Skip initial synchronization

sudo drbdadm new-current-uuid --clear-bitmap NWS_ascs
sudo drbdadm new-current-uuid --clear-bitmap NWS_ers

18. [1] Set the primary node

sudo drbdadm primary --force NWS_ascs
sudo drbdadm primary --force NWS_ers

19. [1] Wait until the new drbd devices are synchronized

sudo cat /proc/drbd

# version: 8.4.6 (api:1/proto:86-101)
# GIT-hash: 833d830e0152d1e457fa7856e71e11248ccf3f70 build by abuild@sheep14, 2016-05-09 23:14:56
# 0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----
# ns:93991268 nr:0 dw:93991268 dr:93944920 al:383 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0
# 1: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----
# ns:6047920 nr:0 dw:6047920 dr:6039112 al:34 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0
# 2: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----
# ns:5142732 nr:0 dw:5142732 dr:5133924 al:30 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0
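
Synchronization is complete when every device line shows ds:UpToDate/UpToDate (and oos:0). A small helper that checks /proc/drbd-style output for this condition (a sketch, not part of the original guide):

```shell
# Succeed only if every DRBD status line ("cs:" lines) reports
# UpToDate on both sides of the replication.
drbd_synced() {
  ! grep 'cs:' | grep -v 'ds:UpToDate/UpToDate' | grep -q .
}

# Example usage on a node:
#   drbd_synced < /proc/drbd && echo "all devices in sync"
```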

20. [1] Create file systems on the drbd devices

sudo mkfs.xfs /dev/drbd0
sudo mkfs.xfs /dev/drbd1

Configure Cluster Framework


[1] Change the default settings

sudo crm configure

crm(live)configure# rsc_defaults resource-stickiness="1"

crm(live)configure# commit
crm(live)configure# exit

Prepare for SAP NetWeaver installation


1. [A] Create the shared directories

sudo mkdir -p /sapmnt/NWS
sudo mkdir -p /usr/sap/trans
sudo mkdir -p /usr/sap/NWS/SYS

sudo chattr +i /sapmnt/NWS
sudo chattr +i /usr/sap/trans
sudo chattr +i /usr/sap/NWS/SYS

2. [A] Configure autofs

sudo vi /etc/auto.master

# Add the following lines to the file, save and exit
+auto.master
/- /etc/auto.direct

Create a file with

sudo vi /etc/auto.direct

# Add the following lines to the file, save and exit
/sapmnt/NWS -nfsvers=4,nosymlink,sync nws-nfs:/sapmntsid
/usr/sap/trans -nfsvers=4,nosymlink,sync nws-nfs:/trans
/usr/sap/NWS/SYS -nfsvers=4,nosymlink,sync nws-nfs:/sidsys

Restart autofs to mount the new shares

sudo systemctl enable autofs
sudo service autofs restart

3. [A] Configure SWAP file

sudo vi /etc/waagent.conf

# Set the property ResourceDisk.EnableSwap to y
# Create and use swapfile on resource disk.
ResourceDisk.EnableSwap=y

# Set the size of the SWAP file with property ResourceDisk.SwapSizeMB
# The free space of the resource disk varies by virtual machine size. Make sure that you do not set a value that is too big. You can check the SWAP space with the swapon command.
# Size of the swapfile.
ResourceDisk.SwapSizeMB=2000

Restart the Agent to activate the change

sudo service waagent restart

Installing SAP NetWeaver ASCS/ERS


1. [1] Create a virtual IP resource and health-probe for the internal load balancer
sudo crm node standby nws-cl-1
sudo crm configure

crm(live)configure# primitive drbd_NWS_ASCS \
  ocf:linbit:drbd \
  params drbd_resource="NWS_ascs" \
  op monitor interval="15" role="Master" \
  op monitor interval="30" role="Slave"

crm(live)configure# ms ms-drbd_NWS_ASCS drbd_NWS_ASCS \
  meta master-max="1" master-node-max="1" clone-max="2" \
  clone-node-max="1" notify="true"

crm(live)configure# primitive fs_NWS_ASCS \
  ocf:heartbeat:Filesystem \
  params device=/dev/drbd0 \
  directory=/usr/sap/NWS/ASCS00 \
  fstype=xfs \
  op monitor interval="10s"

crm(live)configure# primitive vip_NWS_ASCS IPaddr2 \
  params ip=10.0.0.10 cidr_netmask=24 \
  op monitor interval=10 timeout=20

crm(live)configure# primitive nc_NWS_ASCS anything \
  params binfile="/usr/bin/nc" cmdline_options="-l -k 62000" \
  op monitor timeout=20s interval=10 depth=0

crm(live)configure# group g-NWS_ASCS nc_NWS_ASCS vip_NWS_ASCS fs_NWS_ASCS \
  meta resource-stickiness=3000

crm(live)configure# order o-NWS_drbd_before_ASCS inf: \
  ms-drbd_NWS_ASCS:promote g-NWS_ASCS:start

crm(live)configure# colocation col-NWS_ASCS_on_drbd inf: \
  ms-drbd_NWS_ASCS:Master g-NWS_ASCS

crm(live)configure# commit
crm(live)configure# exit

Make sure that the cluster status is ok and that all resources are started. It is not important on which node
the resources are running.

sudo crm_mon -r

# Node nws-cl-1: standby
# Online: [ nws-cl-0 ]
#
# Full list of resources:
#
# Master/Slave Set: ms-drbd_NWS_ASCS [drbd_NWS_ASCS]
# Masters: [ nws-cl-0 ]
# Stopped: [ nws-cl-1 ]
# Resource Group: g-NWS_ASCS
# nc_NWS_ASCS (ocf::heartbeat:anything): Started nws-cl-0
# vip_NWS_ASCS (ocf::heartbeat:IPaddr2): Started nws-cl-0
# fs_NWS_ASCS (ocf::heartbeat:Filesystem): Started nws-cl-0

2. [1] Install SAP NetWeaver ASCS


Install SAP NetWeaver ASCS as root on the first node using a virtual hostname that maps to the IP address
of the load balancer frontend configuration for the ASCS (for example, nws-ascs, 10.0.0.10) and the instance
number that you used for the probe of the load balancer (for example, 00).
You can use the sapinst parameter SAPINST_REMOTE_ACCESS_USER to allow a non-root user to connect to
sapinst.

sudo <swpm>/sapinst SAPINST_REMOTE_ACCESS_USER=sapadmin

3. [1] Create a virtual IP resource and health-probe for the internal load balancer

sudo crm node standby nws-cl-0
sudo crm node online nws-cl-1
sudo crm configure

crm(live)configure# primitive drbd_NWS_ERS \
  ocf:linbit:drbd \
  params drbd_resource="NWS_ers" \
  op monitor interval="15" role="Master" \
  op monitor interval="30" role="Slave"

crm(live)configure# ms ms-drbd_NWS_ERS drbd_NWS_ERS \
  meta master-max="1" master-node-max="1" clone-max="2" \
  clone-node-max="1" notify="true"

crm(live)configure# primitive fs_NWS_ERS \
  ocf:heartbeat:Filesystem \
  params device=/dev/drbd1 \
  directory=/usr/sap/NWS/ERS02 \
  fstype=xfs \
  op monitor interval="10s"

crm(live)configure# primitive vip_NWS_ERS IPaddr2 \
  params ip=10.0.0.11 cidr_netmask=24 \
  op monitor interval=10 timeout=20

crm(live)configure# primitive nc_NWS_ERS anything \
  params binfile="/usr/bin/nc" cmdline_options="-l -k 62102" \
  op monitor timeout=20s interval=10 depth=0

crm(live)configure# group g-NWS_ERS nc_NWS_ERS vip_NWS_ERS fs_NWS_ERS

crm(live)configure# order o-NWS_drbd_before_ERS inf: \
  ms-drbd_NWS_ERS:promote g-NWS_ERS:start

crm(live)configure# colocation col-NWS_ERS_on_drbd inf: \
  ms-drbd_NWS_ERS:Master g-NWS_ERS

crm(live)configure# commit
# WARNING: Resources nc_NWS_ASCS,nc_NWS_ERS,nc_NWS_nfs violate uniqueness for parameter "binfile":
"/usr/bin/nc"
# Do you still want to commit (y/n)? y

crm(live)configure# exit

Make sure that the cluster status is ok and that all resources are started. It is not important on which node
the resources are running.
sudo crm_mon -r

# Node nws-cl-0: standby
# Online: [ nws-cl-1 ]
#
# Full list of resources:
#
# Master/Slave Set: ms-drbd_NWS_ASCS [drbd_NWS_ASCS]
# Masters: [ nws-cl-1 ]
# Stopped: [ nws-cl-0 ]
# Resource Group: g-NWS_ASCS
# nc_NWS_ASCS (ocf::heartbeat:anything): Started nws-cl-1
# vip_NWS_ASCS (ocf::heartbeat:IPaddr2): Started nws-cl-1
# fs_NWS_ASCS (ocf::heartbeat:Filesystem): Started nws-cl-1
# Master/Slave Set: ms-drbd_NWS_ERS [drbd_NWS_ERS]
# Masters: [ nws-cl-1 ]
# Stopped: [ nws-cl-0 ]
# Resource Group: g-NWS_ERS
# nc_NWS_ERS (ocf::heartbeat:anything): Started nws-cl-1
# vip_NWS_ERS (ocf::heartbeat:IPaddr2): Started nws-cl-1
# fs_NWS_ERS (ocf::heartbeat:Filesystem): Started nws-cl-1

4. [2] Install SAP NetWeaver ERS


Install SAP NetWeaver ERS as root on the second node using a virtual hostname that maps to the IP address
of the load balancer frontend configuration for the ERS (for example, nws-ers, 10.0.0.11) and the instance
number that you used for the probe of the load balancer (for example, 02).
You can use the sapinst parameter SAPINST_REMOTE_ACCESS_USER to allow a non-root user to connect to
sapinst.

sudo <swpm>/sapinst SAPINST_REMOTE_ACCESS_USER=sapadmin

NOTE
Please use SWPM SP 20 PL 05 or higher. Lower versions do not set the permissions correctly and the installation will
fail.

5. [1] Adapt the ASCS/SCS and ERS instance profiles


ASCS/SCS profile

sudo vi /sapmnt/NWS/profile/NWS_ASCS00_nws-ascs

# Change the restart command to a start command
#Restart_Program_01 = local $(_EN) pf=$(_PF)
Start_Program_01 = local $(_EN) pf=$(_PF)

# Add the following lines
service/halib = $(DIR_CT_RUN)/saphascriptco.so
service/halib_cluster_connector = /usr/bin/sap_suse_cluster_connector

# Add the keep alive parameter
enque/encni/set_so_keepalive = true

ERS profile
sudo vi /sapmnt/NWS/profile/NWS_ERS02_nws-ers

# Add the following lines
service/halib = $(DIR_CT_RUN)/saphascriptco.so
service/halib_cluster_connector = /usr/bin/sap_suse_cluster_connector

6. [A] Configure Keep Alive


The communication between the SAP NetWeaver application server and the ASCS/SCS is routed through a
software load balancer. The load balancer disconnects inactive connections after a configurable timeout. To
prevent this, set a parameter in the SAP NetWeaver ASCS/SCS profile and change the Linux system settings.
Read SAP Note 1410736 for more information.
The ASCS/SCS profile parameter enque/encni/set_so_keepalive was already added in the last step.

# Change the Linux system configuration
sudo sysctl net.ipv4.tcp_keepalive_time=120
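
The sysctl command above does not survive a reboot. A common way to persist it is a drop-in file under /etc/sysctl.d (a sketch, not from the original guide; the target directory is a parameter so the function can be tested outside /etc):

```shell
# Persist the keepalive setting in a sysctl drop-in file.
persist_keepalive() {
  local dir="${1:-/etc/sysctl.d}"
  echo "net.ipv4.tcp_keepalive_time = 120" > "${dir}/91-sap-keepalive.conf"
}

# On a node, run as root: persist_keepalive
# then reload the settings with: sudo sysctl --system
```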

7. [A] Configure the SAP users after the installation

# Add sidadm to the haclient group
sudo usermod -aG haclient nwsadm

8. [1] Add the ASCS and ERS SAP services to the sapservice file
Add the ASCS service entry to the second node and copy the ERS service entry to the first node.

cat /usr/sap/sapservices | grep ASCS00 | sudo ssh nws-cl-1 "cat >>/usr/sap/sapservices"
sudo ssh nws-cl-1 "cat /usr/sap/sapservices" | grep ERS02 | sudo tee -a /usr/sap/sapservices

9. [1] Create the SAP cluster resources


sudo crm configure property maintenance-mode="true"

sudo crm configure

crm(live)configure# primitive rsc_sap_NWS_ASCS00 SAPInstance \
  operations $id=rsc_sap_NWS_ASCS00-operations \
  op monitor interval=11 timeout=60 on_fail=restart \
  params InstanceName=NWS_ASCS00_nws-ascs START_PROFILE="/sapmnt/NWS/profile/NWS_ASCS00_nws-ascs" \
  AUTOMATIC_RECOVER=false \
  meta resource-stickiness=5000 failure-timeout=60 migration-threshold=1 priority=10

crm(live)configure# primitive rsc_sap_NWS_ERS02 SAPInstance \
  operations $id=rsc_sap_NWS_ERS02-operations \
  op monitor interval=11 timeout=60 on_fail=restart \
  params InstanceName=NWS_ERS02_nws-ers START_PROFILE="/sapmnt/NWS/profile/NWS_ERS02_nws-ers" \
  AUTOMATIC_RECOVER=false IS_ERS=true \
  meta priority=1000

crm(live)configure# modgroup g-NWS_ASCS add rsc_sap_NWS_ASCS00
crm(live)configure# modgroup g-NWS_ERS add rsc_sap_NWS_ERS02

crm(live)configure# colocation col_sap_NWS_no_both -5000: g-NWS_ERS g-NWS_ASCS

crm(live)configure# location loc_sap_NWS_failover_to_ers rsc_sap_NWS_ASCS00 rule 2000: runs_ers_NWS eq 1

crm(live)configure# order ord_sap_NWS_first_start_ascs Optional: rsc_sap_NWS_ASCS00:start rsc_sap_NWS_ERS02:stop symmetrical=false

crm(live)configure# commit
crm(live)configure# exit

sudo crm configure property maintenance-mode="false"
sudo crm node online nws-cl-0

Make sure that the cluster status is ok and that all resources are started. It is not important on which node
the resources are running.

sudo crm_mon -r

# Online: [ nws-cl-0 nws-cl-1 ]
#
# Full list of resources:
#
# Master/Slave Set: ms-drbd_NWS_ASCS [drbd_NWS_ASCS]
# Masters: [ nws-cl-0 ]
# Slaves: [ nws-cl-1 ]
# Resource Group: g-NWS_ASCS
# nc_NWS_ASCS (ocf::heartbeat:anything): Started nws-cl-0
# vip_NWS_ASCS (ocf::heartbeat:IPaddr2): Started nws-cl-0
# fs_NWS_ASCS (ocf::heartbeat:Filesystem): Started nws-cl-0
# rsc_sap_NWS_ASCS00 (ocf::heartbeat:SAPInstance): Started nws-cl-0
# Master/Slave Set: ms-drbd_NWS_ERS [drbd_NWS_ERS]
# Masters: [ nws-cl-1 ]
# Slaves: [ nws-cl-0 ]
# Resource Group: g-NWS_ERS
# nc_NWS_ERS (ocf::heartbeat:anything): Started nws-cl-1
# vip_NWS_ERS (ocf::heartbeat:IPaddr2): Started nws-cl-1
# fs_NWS_ERS (ocf::heartbeat:Filesystem): Started nws-cl-1
# rsc_sap_NWS_ERS02 (ocf::heartbeat:SAPInstance): Started nws-cl-1
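
The nc listener ports used in this guide encode the instance number: 62000 for ASCS instance 00, 62102 for ERS instance 02, and 62503 for the HANA database instance 03. This is a convention of this article rather than a fixed rule (any free port configured as the load balancer health probe works); a helper that reproduces it:

```shell
# Health-probe port convention used in this article:
# prefix 620 for ASCS, 621 for ERS, 625 for the database,
# followed by the two-digit instance number.
probe_port() {
  local role="$1" nr="$2"
  case "$role" in
    ascs) printf '620%02d\n' "$nr" ;;
    ers)  printf '621%02d\n' "$nr" ;;
    db)   printf '625%02d\n' "$nr" ;;
    *)    return 1 ;;
  esac
}

probe_port ascs 0   # -> 62000
probe_port ers 2    # -> 62102
probe_port db 3     # -> 62503
```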

Create STONITH device


The STONITH device uses a Service Principal to authenticate with Microsoft Azure. Follow these steps to create a
Service Principal.
1. Go to https://portal.azure.com
2. Open the Azure Active Directory blade
Go to Properties and write down the Directory Id. This is the tenant id.
3. Click App registrations
4. Click Add
5. Enter a Name, select Application Type "Web app/API", enter a sign-on URL (for example http://localhost) and
click Create
6. The sign-on URL is not used and can be any valid URL
7. Select the new App and click Keys in the Settings tab
8. Enter a description for a new key, select "Never expires" and click Save
9. Write down the Value. It is used as the password for the Service Principal
10. Write down the Application Id. It is used as the username (login id in the steps below) of the Service Principal
The Service Principal does not have permissions to access your Azure resources by default. You need to give the
Service Principal permissions to start and stop (deallocate) all virtual machines of the cluster.
1. Go to https://portal.azure.com
2. Open the All resources blade
3. Select the virtual machine
4. Click Access control (IAM)
5. Click Add
6. Select the role Owner
7. Enter the name of the application you created above
8. Click OK
[1] Create the STONITH devices
After you edited the permissions for the virtual machines, you can configure the STONITH devices in the cluster.

sudo crm configure

# replace the placeholders with your subscription id, resource group, tenant id, service principal id and password

crm(live)configure# primitive rsc_st_azure_1 stonith:fence_azure_arm \
  params subscriptionId="subscription id" resourceGroup="resource group" tenantId="tenant id" login="login id" passwd="password"

crm(live)configure# primitive rsc_st_azure_2 stonith:fence_azure_arm \
  params subscriptionId="subscription id" resourceGroup="resource group" tenantId="tenant id" login="login id" passwd="password"

crm(live)configure# colocation col_st_azure -2000: rsc_st_azure_1:Started rsc_st_azure_2:Started

crm(live)configure# commit
crm(live)configure# exit

[1] Enable the use of a STONITH device



sudo crm configure property stonith-enabled=true

Install database
In this example, SAP HANA System Replication is installed and configured. SAP HANA will run in the same
cluster as the SAP NetWeaver ASCS/SCS and ERS. You can also install SAP HANA on a dedicated cluster. See High
Availability of SAP HANA on Azure Virtual Machines (VMs) for more information.
Prepare for SAP HANA installation
We generally recommend using LVM for volumes that store data and log files. For testing purposes, you can also
choose to store the data and log file directly on a plain disk.
1. [A] LVM
The example below assumes that the virtual machines have four data disks attached that should be used to
create two volumes.
Create physical volumes for all disks that you want to use.

sudo pvcreate /dev/sdd
sudo pvcreate /dev/sde
sudo pvcreate /dev/sdf
sudo pvcreate /dev/sdg

Create a volume group for the data files, one volume group for the log files and one for the shared directory
of SAP HANA

sudo vgcreate vg_hana_data /dev/sdd /dev/sde
sudo vgcreate vg_hana_log /dev/sdf
sudo vgcreate vg_hana_shared /dev/sdg

Create the logical volumes

sudo lvcreate -l 100%FREE -n hana_data vg_hana_data
sudo lvcreate -l 100%FREE -n hana_log vg_hana_log
sudo lvcreate -l 100%FREE -n hana_shared vg_hana_shared
sudo mkfs.xfs /dev/vg_hana_data/hana_data
sudo mkfs.xfs /dev/vg_hana_log/hana_log
sudo mkfs.xfs /dev/vg_hana_shared/hana_shared

Create the mount directories and copy the UUID of all logical volumes

sudo mkdir -p /hana/data
sudo mkdir -p /hana/log
sudo mkdir -p /hana/shared
sudo chattr +i /hana/data
sudo chattr +i /hana/log
sudo chattr +i /hana/shared
# write down the id of /dev/vg_hana_data/hana_data, /dev/vg_hana_log/hana_log and /dev/vg_hana_shared/hana_shared
sudo blkid

Create autofs entries for the three logical volumes

sudo vi /etc/auto.direct

Insert these lines into /etc/auto.direct

/hana/data -fstype=xfs :UUID=<UUID of /dev/vg_hana_data/hana_data>
/hana/log -fstype=xfs :UUID=<UUID of /dev/vg_hana_log/hana_log>
/hana/shared -fstype=xfs :UUID=<UUID of /dev/vg_hana_shared/hana_shared>

Mount the new volumes

sudo service autofs restart

2. [A] Plain Disks


For small or demo systems, you can place your HANA data and log files on one disk. The following
commands create a partition on /dev/sdd and format it with xfs.

sudo sh -c 'echo -e "n\n\n\n\n\nw\n" | fdisk /dev/sdd'
sudo mkfs.xfs /dev/sdd1

# write down the id of /dev/sdd1
sudo /sbin/blkid
sudo vi /etc/auto.direct

Insert this line to /etc/auto.direct

/hana -fstype=xfs :UUID=<UUID>

Create the target directory and mount the disk.

sudo mkdir /hana
sudo chattr +i /hana
sudo service autofs restart

Installing SAP HANA


The following steps are based on chapter 4 of the SAP HANA SR Performance Optimized Scenario guide to install
SAP HANA System Replication. Please read it before you continue the installation.
1. [A] Run hdblcm from the HANA DVD

sudo hdblcm --sid=HDB --number=03 --action=install --batch --password=<password> --system_user_password=<password for system user>

sudo /hana/shared/HDB/hdblcm/hdblcm --action=configure_internal_network --listen_interface=internal --internal_network=10.0.0/24 --password=<password for system user> --batch

2. [A] Upgrade SAP Host Agent


Download the latest SAP Host Agent archive from the SAP Software Center and run the following command
to upgrade the agent. Replace the path to the archive to point to the file you downloaded.
sudo /usr/sap/hostctrl/exe/saphostexec -upgrade -archive <path to SAP Host Agent SAR>

3. [1] Create HANA replication (as root)


Run the following command. Make sure to replace the HANA system ID (HDB) and instance number (03)
with the values of your SAP HANA installation.

PATH="$PATH:/usr/sap/HDB/HDB03/exe"
hdbsql -u system -i 03 'CREATE USER hdbhasync PASSWORD "passwd"'
hdbsql -u system -i 03 'GRANT DATA ADMIN TO hdbhasync'
hdbsql -u system -i 03 'ALTER USER hdbhasync DISABLE PASSWORD LIFETIME'

4. [A] Create keystore entry (as root)

PATH="$PATH:/usr/sap/HDB/HDB03/exe"
hdbuserstore SET hdbhaloc localhost:30315 hdbhasync <passwd>
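
The port 30315 in the keystore entry follows the usual HANA pattern 3&lt;instance number&gt;15, the indexserver SQL port derived from the instance number 03 used in this example (this mapping is an observation about the example, not stated in the original text):

```shell
# HANA indexserver SQL port for a given instance number: 3<nn>15.
hana_sql_port() {
  printf '3%02d15\n' "$1"
}

hana_sql_port 3   # instance 03 -> 30315
```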

5. [1] Backup database (as root)

PATH="$PATH:/usr/sap/HDB/HDB03/exe"
hdbsql -u system -i 03 "BACKUP DATA USING FILE ('initialbackup')"

6. [1] Switch to the HANA sapsid user and create the primary site.

su - hdbadm
hdbnsutil -sr_enable -name=SITE1

7. [2] Switch to the HANA sapsid user and create the secondary site.

su - hdbadm
sapcontrol -nr 03 -function StopWait 600 10
hdbnsutil -sr_register --remoteHost=nws-cl-0 --remoteInstance=03 --replicationMode=sync --name=SITE2

8. [1] Create SAP HANA cluster resources


First, create the topology.
sudo crm configure

# replace the placeholders with your instance number and HANA system id

crm(live)configure# primitive rsc_SAPHanaTopology_HDB_HDB03 ocf:suse:SAPHanaTopology \
  operations $id="rsc_sap2_HDB_HDB03-operations" \
  op monitor interval="10" timeout="600" \
  op start interval="0" timeout="600" \
  op stop interval="0" timeout="300" \
  params SID="HDB" InstanceNumber="03"

crm(live)configure# clone cln_SAPHanaTopology_HDB_HDB03 rsc_SAPHanaTopology_HDB_HDB03 \
  meta is-managed="true" clone-node-max="1" target-role="Started" interleave="true"

crm(live)configure# commit
crm(live)configure# exit

Next, create the HANA resources

sudo crm configure

# replace the placeholders with your instance number, HANA system id and the frontend IP address of the Azure load balancer

crm(live)configure# primitive rsc_SAPHana_HDB_HDB03 ocf:suse:SAPHana \
  operations $id="rsc_sap_HDB_HDB03-operations" \
  op start interval="0" timeout="3600" \
  op stop interval="0" timeout="3600" \
  op promote interval="0" timeout="3600" \
  op monitor interval="60" role="Master" timeout="700" \
  op monitor interval="61" role="Slave" timeout="700" \
  params SID="HDB" InstanceNumber="03" PREFER_SITE_TAKEOVER="true" \
  DUPLICATE_PRIMARY_TIMEOUT="7200" AUTOMATED_REGISTER="false"

crm(live)configure# ms msl_SAPHana_HDB_HDB03 rsc_SAPHana_HDB_HDB03 \
  meta is-managed="true" notify="true" clone-max="2" clone-node-max="1" \
  target-role="Started" interleave="true"

crm(live)configure# primitive rsc_ip_HDB_HDB03 ocf:heartbeat:IPaddr2 \
  meta target-role="Started" is-managed="true" \
  operations $id="rsc_ip_HDB_HDB03-operations" \
  op monitor interval="10s" timeout="20s" \
  params ip="10.0.0.12"

crm(live)configure# primitive rsc_nc_HDB_HDB03 anything \
  params binfile="/usr/bin/nc" cmdline_options="-l -k 62503" \
  op monitor timeout=20s interval=10 depth=0

crm(live)configure# group g_ip_HDB_HDB03 rsc_ip_HDB_HDB03 rsc_nc_HDB_HDB03

crm(live)configure# colocation col_saphana_ip_HDB_HDB03 2000: g_ip_HDB_HDB03:Started \
  msl_SAPHana_HDB_HDB03:Master

crm(live)configure# order ord_SAPHana_HDB_HDB03 2000: cln_SAPHanaTopology_HDB_HDB03 \
  msl_SAPHana_HDB_HDB03

crm(live)configure# commit
crm(live)configure# exit

Make sure that the cluster status is ok and that all resources are started. It is not important on which node
the resources are running.
sudo crm_mon -r

# Online: [ nws-cl-0 nws-cl-1 ]
#
# Full list of resources:
#
# Master/Slave Set: ms-drbd_NWS_ASCS [drbd_NWS_ASCS]
# Masters: [ nws-cl-1 ]
# Slaves: [ nws-cl-0 ]
# Resource Group: g-NWS_ASCS
# nc_NWS_ASCS (ocf::heartbeat:anything): Started nws-cl-1
# vip_NWS_ASCS (ocf::heartbeat:IPaddr2): Started nws-cl-1
# fs_NWS_ASCS (ocf::heartbeat:Filesystem): Started nws-cl-1
# rsc_sap_NWS_ASCS00 (ocf::heartbeat:SAPInstance): Started nws-cl-1
# Master/Slave Set: ms-drbd_NWS_ERS [drbd_NWS_ERS]
# Masters: [ nws-cl-0 ]
# Slaves: [ nws-cl-1 ]
# Resource Group: g-NWS_ERS
# nc_NWS_ERS (ocf::heartbeat:anything): Started nws-cl-0
# vip_NWS_ERS (ocf::heartbeat:IPaddr2): Started nws-cl-0
# fs_NWS_ERS (ocf::heartbeat:Filesystem): Started nws-cl-0
# rsc_sap_NWS_ERS02 (ocf::heartbeat:SAPInstance): Started nws-cl-0
# Clone Set: cln_SAPHanaTopology_HDB_HDB03 [rsc_SAPHanaTopology_HDB_HDB03]
# Started: [ nws-cl-0 nws-cl-1 ]
# Master/Slave Set: msl_SAPHana_HDB_HDB03 [rsc_SAPHana_HDB_HDB03]
# Masters: [ nws-cl-0 ]
# Slaves: [ nws-cl-1 ]
# Resource Group: g_ip_HDB_HDB03
# rsc_ip_HDB_HDB03 (ocf::heartbeat:IPaddr2): Started nws-cl-0
# rsc_nc_HDB_HDB03 (ocf::heartbeat:anything): Started nws-cl-0
# rsc_st_azure_1 (stonith:fence_azure_arm): Started nws-cl-0
# rsc_st_azure_2 (stonith:fence_azure_arm): Started nws-cl-1
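If any resource shows as Stopped or FAILED instead, a common first step is to clear its failure history so that the cluster retries the start. A minimal troubleshooting sketch, using the SAPHana resource name from this guide as an example:

```shell
# Show the cluster status once, including inactive resources
sudo crm_mon -1 -r

# Clear the fail count of a resource so Pacemaker attempts to start it again
sudo crm resource cleanup rsc_SAPHana_HDB_HDB03
```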

9. [1] Install the SAP NetWeaver database instance


Install the SAP NetWeaver database instance as root using a virtual hostname that maps to the IP address of
the load balancer frontend configuration for the database, for example nws-db and 10.0.0.12.
You can use the sapinst parameter SAPINST_REMOTE_ACCESS_USER to allow a non-root user to connect to
sapinst.

sudo <swpm>/sapinst SAPINST_REMOTE_ACCESS_USER=sapadmin

SAP NetWeaver application server installation


Follow these steps to install an SAP application server. The steps below assume that you install the application
server on a server different from the ASCS/SCS and HANA servers. Otherwise, some of the steps below (like
configuring host name resolution) are not needed.
1. Set up host name resolution
You can either use a DNS server or modify /etc/hosts on all nodes. This example shows how to use the
/etc/hosts file. Replace the IP addresses and hostnames in the following commands.

sudo vi /etc/hosts

Insert the following lines into /etc/hosts. Change the IP addresses and hostnames to match your environment.
# IP address of the load balancer frontend configuration for NFS
10.0.0.4 nws-nfs
# IP address of the load balancer frontend configuration for SAP NetWeaver ASCS/SCS
10.0.0.10 nws-ascs
# IP address of the load balancer frontend configuration for SAP NetWeaver ERS
10.0.0.11 nws-ers
# IP address of the load balancer frontend configuration for database
10.0.0.12 nws-db
# IP address of the application server
10.0.0.8 nws-di-0

2. Create the sapmnt directory

sudo mkdir -p /sapmnt/NWS


sudo mkdir -p /usr/sap/trans

sudo chattr +i /sapmnt/NWS


sudo chattr +i /usr/sap/trans

3. Configure autofs

sudo vi /etc/auto.master

# Add the following lines to the file, save and exit
+auto.master
/- /etc/auto.direct

Create a new file with

sudo vi /etc/auto.direct

# Add the following lines to the file, save and exit
/sapmnt/NWS -nfsvers=4,nosymlink,sync nws-nfs:/sapmntsid
/usr/sap/trans -nfsvers=4,nosymlink,sync nws-nfs:/trans

Restart autofs to mount the new shares

sudo systemctl enable autofs
sudo service autofs restart

4. Configure SWAP file


sudo vi /etc/waagent.conf

# Set the property ResourceDisk.EnableSwap to y
# Create and use swapfile on resource disk.
ResourceDisk.EnableSwap=y

# Set the size of the SWAP file with property ResourceDisk.SwapSizeMB
# The free space of the resource disk varies by virtual machine size. Make sure
# that you do not set a value that is too big. You can check the SWAP space
# with the command swapon.
# Size of the swapfile.
ResourceDisk.SwapSizeMB=2000

Restart the Agent to activate the change

sudo service waagent restart
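Once the agent has restarted, a quick way to confirm that the swap configuration took effect, using only standard Linux tools:

```shell
# Verify that the swap file is active. The Swap line of free should show
# roughly the configured ResourceDisk.SwapSizeMB (2000 MB in this example).
swapon -s
free -m
```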

5. Install SAP NetWeaver application server


Install a primary or additional SAP NetWeaver application server.
You can use the sapinst parameter SAPINST_REMOTE_ACCESS_USER to allow a non-root user to connect to
sapinst.

sudo <swpm>/sapinst SAPINST_REMOTE_ACCESS_USER=sapadmin

6. Update SAP HANA secure store


Update the SAP HANA secure store to point to the virtual name of the SAP HANA System Replication setup.

su - nwsadm
hdbuserstore SET DEFAULT nws-db:30315 SAPABAP1 <password of ABAP schema>
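The port 30315 used above follows the standard HANA convention for the SQL port of a single-container system: 3, then the two-digit instance number (03 for HDB03), then 15. A small sketch of the derivation, assuming that standard port scheme:

```shell
# HANA SQL port = 3<two-digit instance number>15; HDB03 -> instance 03
INSTANCE_NUMBER=03
SQL_PORT="3${INSTANCE_NUMBER}15"
echo "$SQL_PORT"   # -> 30315
```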

Next steps
Azure Virtual Machines planning and implementation for SAP
Azure Virtual Machines deployment for SAP
Azure Virtual Machines DBMS deployment for SAP
To learn how to establish high availability and plan for disaster recovery of SAP HANA on Azure (large
instances), see SAP HANA (large instances) high availability and disaster recovery on Azure.
To learn how to establish high availability and plan for disaster recovery of SAP HANA on Azure VMs, see High
Availability of SAP HANA on Azure Virtual Machines (VMs).
Create an SAP NetWeaver multi-SID configuration
8/21/2017 7 min to read

In September 2016, Microsoft released a feature where you can manage multiple virtual IP addresses by using an
Azure internal load balancer. This functionality already exists in the Azure external load balancer.
If you have an SAP deployment, you can use an internal load balancer to create a Windows cluster configuration
for SAP ASCS/SCS, as documented in the guide for high-availability SAP NetWeaver on Windows VMs.
This article focuses on how to move from a single ASCS/SCS installation to an SAP multi-SID configuration by
installing additional SAP ASCS/SCS clustered instances into an existing Windows Server Failover Clustering
(WSFC) cluster. When this process is completed, you will have configured an SAP multi-SID cluster.

NOTE
This feature is available only in the Azure Resource Manager deployment model.

Prerequisites
You have already configured a WSFC cluster that is used for one SAP ASCS/SCS instance, as discussed in the guide
for high-availability SAP NetWeaver on Windows VMs and as shown in this diagram.
Target architecture
The goal is to install multiple SAP ABAP ASCS or SAP Java SCS clustered instances in the same WSFC cluster, as
illustrated here:
NOTE
There is a limit to the number of private front-end IPs for each Azure internal load balancer.
The maximum number of SAP ASCS/SCS instances in one WSFC cluster is equal to the maximum number of private front-end
IPs for each Azure internal load balancer.

For more information about load-balancer limits, see "Private front end IP per load balancer" in Networking limits:
Azure Resource Manager.
The complete landscape with two high-availability SAP systems would look like this:
IMPORTANT
The setup must meet the following conditions:
The SAP ASCS/SCS instances must share the same WSFC cluster.
Each DBMS SID must have its own dedicated WSFC cluster.
SAP application servers that belong to one SAP system SID must have their own dedicated VMs.

Prepare the infrastructure


To prepare your infrastructure, you can install an additional SAP ASCS/SCS instance with the following parameters:

PARAMETER NAME                                            VALUE

SAP ASCS/SCS SID                                          PR5

SAP DBMS internal load balancer                           pr1-lb-ascs

SAP virtual host name                                     pr5-sap-cl

SAP ASCS/SCS virtual host IP address                      10.0.0.50
(additional Azure load balancer IP address)

SAP ASCS/SCS instance number                              50

ILB probe port for additional SAP ASCS/SCS instance       62350

NOTE
For SAP ASCS/SCS cluster instances, each IP address requires a unique probe port. For example, if one IP address on an Azure
internal load balancer uses probe port 62300, no other IP address on that load balancer can use probe port 62300.
For our purposes, because probe port 62300 is already reserved, we are using probe port 62350.
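The same convention is used by the PowerShell script later in this article: the probe port is built from the prefix 623 followed by the two-digit SAP instance number. A quick sketch of the scheme:

```shell
# Probe port convention in this guide: 623 + two-digit SAP instance number
INSTANCE_NUMBER=50
PROBE_PORT="623${INSTANCE_NUMBER}"
echo "$PROBE_PORT"   # -> 62350
```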

You can install additional SAP ASCS/SCS instances in the existing WSFC cluster with two nodes:

VIRTUAL MACHINE ROLE VIRTUAL MACHINE HOST NAME STATIC IP ADDRESS

1st cluster node for ASCS/SCS instance pr1-ascs-0 10.0.0.10

2nd cluster node for ASCS/SCS instance pr1-ascs-1 10.0.0.9

Create a virtual host name for the clustered SAP ASCS/SCS instance on the DNS server
You can create a DNS entry for the virtual host name of the ASCS/SCS instance by using the following parameters:

NEW SAP ASCS/SCS VIRTUAL HOST NAME ASSOCIATED IP ADDRESS

pr5-sap-cl 10.0.0.50

The new host name and IP address are displayed in the DNS Manager, as shown in the following screenshot:

The procedure for creating a DNS entry is also described in detail in the main guide for high-availability SAP
NetWeaver on Windows VMs.
NOTE
The new IP address that you assign to the virtual host name of the additional ASCS/SCS instance must be the same as the
new IP address that you assigned to the SAP Azure load balancer.
In our scenario, the IP address is 10.0.0.50.

Add an IP address to an existing Azure internal load balancer by using PowerShell


To create more than one SAP ASCS/SCS instance in the same WSFC cluster, use PowerShell to add an IP address to
an existing Azure internal load balancer. Each IP address requires its own load-balancing rules, probe port,
front-end IP pool, and back-end pool.
The following script adds a new IP address to an existing load balancer. Update the PowerShell variables for your
environment. The script will create all needed load-balancing rules for all SAP ASCS/SCS ports.

# Select-AzureRmSubscription -SubscriptionId <xxxxxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx>

Clear-Host
$ResourceGroupName = "SAP-MULTI-SID-Landscape"   # Existing resource group name
$VNetName = "pr2-vnet"                           # Existing virtual network name
$SubnetName = "Subnet"                           # Existing subnet name
$ILBName = "pr2-lb-ascs"                         # Existing ILB name
$ILBIP = "10.0.0.50"                             # New IP address
$VMNames = "pr2-ascs-0","pr2-ascs-1"             # Existing cluster virtual machine names
$SAPInstanceNumber = 50                          # SAP ASCS/SCS instance number: must be a unique value for each cluster
[int]$ProbePort = "623$SAPInstanceNumber"        # Probe port: must be a unique value for each IP and load balancer

$ILB = Get-AzureRmLoadBalancer -Name $ILBName -ResourceGroupName $ResourceGroupName

$count = $ILB.FrontendIpConfigurations.Count + 1
$FrontEndConfigurationName ="lbFrontendASCS$count"
$LBProbeName = "lbProbeASCS$count"

# Get the Azure VNet and subnet
$VNet = Get-AzureRmVirtualNetwork -Name $VNetName -ResourceGroupName $ResourceGroupName
$Subnet = Get-AzureRmVirtualNetworkSubnetConfig -VirtualNetwork $VNet -Name $SubnetName

# Add second front-end and probe configuration
Write-Host "Adding new front end IP Pool '$FrontEndConfigurationName' ..." -ForegroundColor Green
$ILB | Add-AzureRmLoadBalancerFrontendIpConfig -Name $FrontEndConfigurationName -PrivateIpAddress $ILBIP -SubnetId $Subnet.Id
$ILB | Add-AzureRmLoadBalancerProbeConfig -Name $LBProbeName -Protocol Tcp -Port $ProbePort -ProbeCount 2 -IntervalInSeconds 10 | Set-AzureRmLoadBalancer

# Get new updated configuration
$ILB = Get-AzureRmLoadBalancer -Name $ILBName -ResourceGroupName $ResourceGroupName
# Get new updated LB FrontendIP config
$FEConfig = Get-AzureRmLoadBalancerFrontendIpConfig -Name $FrontEndConfigurationName -LoadBalancer $ILB
$HealthProbe = Get-AzureRmLoadBalancerProbeConfig -Name $LBProbeName -LoadBalancer $ILB

# Add new back-end configuration into existing ILB
$BackEndConfigurationName = "backendPoolASCS$count"
Write-Host "Adding new backend Pool '$BackEndConfigurationName' ..." -ForegroundColor Green
$BEConfig = Add-AzureRmLoadBalancerBackendAddressPoolConfig -Name $BackEndConfigurationName -LoadBalancer $ILB | Set-AzureRmLoadBalancer

# Get new updated config
$ILB = Get-AzureRmLoadBalancer -Name $ILBName -ResourceGroupName $ResourceGroupName

# Assign VM NICs to backend pool
$BEPool = Get-AzureRmLoadBalancerBackendAddressPoolConfig -Name $BackEndConfigurationName -LoadBalancer $ILB
foreach($VMName in $VMNames){
    $VM = Get-AzureRmVM -ResourceGroupName $ResourceGroupName -Name $VMName
    $NICName = ($VM.NetworkInterfaceIDs[0].Split('/') | select -last 1)
    $NIC = Get-AzureRmNetworkInterface -Name $NICName -ResourceGroupName $ResourceGroupName
    $NIC.IpConfigurations[0].LoadBalancerBackendAddressPools += $BEPool
    Write-Host "Assigning network card '$NICName' of the '$VMName' VM to the backend pool '$BackEndConfigurationName' ..." -ForegroundColor Green
    Set-AzureRmNetworkInterface -NetworkInterface $NIC
    #Start-AzureRmVM -ResourceGroupName $ResourceGroupName -Name $VM.Name
}

# Create load-balancing rules
$Ports = "445","32$SAPInstanceNumber","33$SAPInstanceNumber","36$SAPInstanceNumber","39$SAPInstanceNumber","5985","81$SAPInstanceNumber","5$SAPInstanceNumber`13","5$SAPInstanceNumber`14","5$SAPInstanceNumber`16"
$ILB = Get-AzureRmLoadBalancer -Name $ILBName -ResourceGroupName $ResourceGroupName
$FEConfig = Get-AzureRmLoadBalancerFrontendIpConfig -Name $FrontEndConfigurationName -LoadBalancer $ILB
$BEConfig = Get-AzureRmLoadBalancerBackendAddressPoolConfig -Name $BackEndConfigurationName -LoadBalancer $ILB
$HealthProbe = Get-AzureRmLoadBalancerProbeConfig -Name $LBProbeName -LoadBalancer $ILB

Write-Host "Creating load balancing rules for the ports: '$Ports' ... " -ForegroundColor Green

foreach ($Port in $Ports) {

    $LBConfigRuleName = "lbrule$Port" + "_$count"

    Write-Host "Creating load balancing rule '$LBConfigRuleName' for the port '$Port' ..." -ForegroundColor Green

    $ILB | Add-AzureRmLoadBalancerRuleConfig -Name $LBConfigRuleName -FrontendIpConfiguration $FEConfig -BackendAddressPool $BEConfig -Probe $HealthProbe -Protocol Tcp -FrontendPort $Port -BackendPort $Port -IdleTimeoutInMinutes 30 -LoadDistribution Default -EnableFloatingIP
}

$ILB | Set-AzureRmLoadBalancer

Write-Host "Successfully added new IP '$ILBIP' to the internal load balancer '$ILBName'!" -ForegroundColor Green

After the script has run, the results are displayed in the Azure portal, as shown in the following screenshot:

Add disks to cluster machines, and configure the SIOS cluster shared disk
You must add a new cluster shared disk for each additional SAP ASCS/SCS instance. For Windows Server 2012 R2,
the WSFC cluster shared disk currently in use is the SIOS DataKeeper software solution.
Do the following:
1. Add an additional disk or disks of the same size (which you need to stripe) to each of the cluster nodes, and
format them.
2. Configure storage replication with SIOS DataKeeper.
This procedure assumes that you have already installed SIOS DataKeeper on the WSFC cluster machines. If you
have, you can now configure replication between the machines. The process is described in detail in
the main guide for high-availability SAP NetWeaver on Windows VMs.

Deploy VMs for SAP application servers and DBMS cluster


To complete the infrastructure preparation for the second SAP system, do the following:
1. Deploy dedicated VMs for SAP application servers, and put them in their own dedicated availability set.
2. Deploy dedicated VMs for the DBMS cluster, and put them in their own dedicated availability set.

Install the second SAP SID2 NetWeaver system


The complete process of installing a second SAP SID2 system is described in the main guide for high-availability
SAP NetWeaver on Windows VMs.
The high-level procedure is as follows:
1. Install the SAP first cluster node.
In this step, you are installing SAP with a high-availability ASCS/SCS instance on the EXISTING WSFC
cluster node 1.
2. Modify the SAP profile of the ASCS/SCS instance.
3. Configure a probe port.
In this step, you are configuring an SAP cluster resource SAP-SID2-IP probe port by using PowerShell.
Execute this configuration on one of the SAP ASCS/SCS cluster nodes.
4. Install the database instance.
In this step, you are installing DBMS on a dedicated WSFC cluster.
5. Install the second cluster node.
In this step, you are installing SAP with a high-availability ASCS/SCS instance on the existing WSFC cluster
node 2.
6. Open Windows Firewall ports for the SAP ASCS/SCS instance and ProbePort.
On both cluster nodes that are used for SAP ASCS/SCS instances, you are opening all Windows Firewall
ports that are used by SAP ASCS/SCS. These ports are listed in the guide for high-availability SAP
NetWeaver on Windows VMs.
Also open the Azure internal load balancer probe port, which is 62350 in our scenario.
7. Change the start type of the SAP ERS Windows service instance.
8. Install the SAP primary application server on the new dedicated VM.
9. Install the SAP additional application server on the new dedicated VM.
10. Test the SAP ASCS/SCS instance failover and SIOS replication.

Next steps
Networking limits: Azure Resource Manager
Multiple VIPs for Azure Load Balancer
Guide for high-availability SAP NetWeaver on Windows VMs
Azure Virtual Machines deployment for SAP NetWeaver
8/21/2017 43 min to read

NOTE
Azure has two different deployment models for creating and working with resources: Resource Manager and classic. This
article covers using the Resource Manager deployment model, which Microsoft recommends for new deployments
instead of the classic deployment model.

Azure Virtual Machines is the solution for organizations that need compute and storage resources, in minimal
time, and without lengthy procurement cycles. You can use Azure Virtual Machines to deploy classical
applications, like SAP NetWeaver-based applications, in Azure. Extend an application's reliability and availability
without additional on-premises resources. Azure Virtual Machines supports cross-premises connectivity, so
you can integrate Azure Virtual Machines into your organization's on-premises domains, private clouds, and
SAP system landscape.
In this article, we cover the steps to deploy SAP applications on virtual machines (VMs) in Azure, including
alternate deployment options and troubleshooting. This article builds on the information in Azure Virtual
Machines planning and implementation for SAP NetWeaver. It also complements SAP installation
documentation and SAP Notes, which are the primary resources for installing and deploying SAP software.

Prerequisites
Setting up an Azure virtual machine for SAP software deployment involves multiple steps and resources.
Before you start, make sure that you meet the prerequisites for installing SAP software on virtual machines in
Azure.
Local computer
To manage Windows or Linux VMs, you can use a PowerShell script and the Azure portal. For both tools, you
need a local computer running Windows 7 or a later version of Windows. If you want to manage only Linux
VMs and you want to use a Linux computer for this task, you can use Azure CLI.
Internet connection
To download and run the tools and scripts that are required for SAP software deployment, you must be
connected to the Internet. The Azure VM that is running the Azure Enhanced Monitoring Extension for SAP also
needs access to the Internet. If the Azure VM is part of an Azure virtual network or on-premises domain, make
sure that the relevant proxy settings are set, as described in Configure the proxy.
Microsoft Azure subscription
You need an active Azure account.
Topology and networking
You need to define the topology and architecture of the SAP deployment in Azure:
Azure storage accounts to be used
Virtual network where you want to deploy the SAP system
Resource group to which you want to deploy the SAP system
Azure region where you want to deploy the SAP system
SAP configuration (two-tier or three-tier)
VM sizes and the number of additional data disks to be mounted to the VMs
SAP Correction and Transport System (CTS) configuration
Create and configure Azure storage accounts (if required) or Azure virtual networks before you begin the SAP
software deployment process. For information about how to create and configure these resources, see Azure
Virtual Machines planning and implementation for SAP NetWeaver.
SAP sizing
Know the following information, for SAP sizing:
Projected SAP workload, for example, by using the SAP Quick Sizer tool, and the SAP Application
Performance Standard (SAPS) number
Required CPU resource and memory consumption of the SAP system
Required input/output (I/O) operations per second
Required network bandwidth of eventual communication between VMs in Azure
Required network bandwidth between on-premises assets and the Azure-deployed SAP system
Resource groups
In Azure Resource Manager, you can use resource groups to manage all the application resources in your
Azure subscription. For more information, see Azure Resource Manager overview.

Resources
SAP resources
When you are setting up your SAP software deployment, you need the following SAP resources:
SAP Note 1928533, which has:
List of Azure VM sizes that are supported for the deployment of SAP software
Important capacity information for Azure VM sizes
Supported SAP software, and operating system (OS) and database combinations
Required SAP kernel version for Windows and Linux on Microsoft Azure
SAP Note 2015553 lists prerequisites for SAP-supported SAP software deployments in Azure.
SAP Note 2178632 has detailed information about all monitoring metrics reported for SAP in Azure.
SAP Note 1409604 has the required SAP Host Agent version for Windows in Azure.
SAP Note 2191498 has the required SAP Host Agent version for Linux in Azure.
SAP Note 2243692 has information about SAP licensing on Linux in Azure.
SAP Note 1984787 has general information about SUSE Linux Enterprise Server 12.
SAP Note 2002167 has general information about Red Hat Enterprise Linux 7.x.
SAP Note 2069760 has general information about Oracle Linux 7.x.
SAP Note 1999351 has additional troubleshooting information for the Azure Enhanced Monitoring
Extension for SAP.
SAP Note 1597355 has general information about swap-space for Linux.
SAP on Azure SCN page has news and a collection of useful resources.
SAP Community WIKI has all required SAP Notes for Linux.
SAP-specific PowerShell cmdlets that are part of Azure PowerShell.
SAP-specific Azure CLI commands that are part of Azure CLI.
Windows resources
These Microsoft articles cover SAP deployments in Azure:
Azure Virtual Machines planning and implementation for SAP NetWeaver
Azure Virtual Machines deployment for SAP NetWeaver (this article)
Azure Virtual Machines DBMS deployment for SAP NetWeaver

Deployment scenarios for SAP software on Azure VMs


You have multiple options for deploying VMs and associated disks in Azure. It's important to understand the
differences between deployment options, because you might take different steps to prepare your VMs for
deployment based on the deployment type you choose.
Scenario 1: Deploying a VM from the Azure Marketplace for SAP
You can use an image provided by Microsoft or by a third party in the Azure Marketplace to deploy your VM.
The Marketplace offers some standard OS images of Windows Server and different Linux distributions. You
also can deploy an image that includes database management system (DBMS) SKUs, for example, Microsoft
SQL Server. For more information about using images with DBMS SKUs, see Azure Virtual Machines DBMS
deployment for SAP NetWeaver.
The following flowchart shows the SAP-specific sequence of steps for deploying a VM from the Azure
Marketplace:

Create a virtual machine by using the Azure portal


The easiest way to create a new virtual machine with an image from the Azure Marketplace is by using the
Azure portal.
1. Go to https://portal.azure.com/#create/hub. Or, in the Azure portal menu, select + New.
2. Select Compute, and then select the type of operating system you want to deploy. For example, Windows
Server 2012 R2, SUSE Linux Enterprise Server 12 (SLES 12), Red Hat Enterprise Linux 7.2 (RHEL 7.2), or
Oracle Linux 7.2. The default list view does not show all supported operating systems. Select see all for a
full list. For more information about supported operating systems for SAP software deployment, see SAP
Note 1928533.
3. On the next page, review terms and conditions.
4. In the Select a deployment model box, select Resource Manager.
5. Select Create.
The wizard guides you through setting the required parameters to create the virtual machine, in addition to all
required resources, like network interfaces and storage accounts. Some of these parameters are:
1. Basics:
Name: The name of the resource (the virtual machine name).
VM disk type: Select the disk type of the OS disk. If you want to use Premium Storage for your data
disks, we recommend using Premium Storage for the OS disk as well.
Username and password or SSH public key: Enter the username and password of the user that is
created during the provisioning. For a Linux virtual machine, you can enter the public Secure Shell
(SSH) key that you use to sign in to the machine.
Subscription: Select the subscription that you want to use to provision the new virtual machine.
Resource group: The name of the resource group for the VM. You can enter either the name of a
new resource group or the name of a resource group that already exists.
Location: Where to deploy the new virtual machine. If you want to connect the virtual machine to
your on-premises network, make sure you select the location of the virtual network that connects
Azure to your on-premises network. For more information, see Microsoft Azure networking in Azure
Virtual Machines planning and implementation for SAP NetWeaver.
2. Size:
For a list of supported VM types, see SAP Note 1928533. Be sure you select the correct VM type if you
want to use Azure Premium Storage. Not all VM types support Premium Storage. For more information,
see Storage: Microsoft Azure Storage and data disks and Azure Premium Storage in Azure Virtual
Machines planning and implementation for SAP NetWeaver.
3. Settings:
Storage
Disk Type: Select the disk type of the OS disk. If you want to use Premium Storage for your
data disks, we recommend using Premium Storage for the OS disk as well.
Use managed disks: If you want to use Managed Disks, select Yes. For more information
about Managed Disks, see chapter Managed Disks in the planning guide.
Storage account: Select an existing storage account or create a new one. Not all storage
types work for running SAP applications. For more information about storage types, see
Microsoft Azure Storage in Azure Virtual Machines DBMS deployment for SAP NetWeaver.
Network
Virtual network and Subnet: To integrate the virtual machine with your intranet, select the
virtual network that is connected to your on-premises network.
Public IP address: Select the public IP address that you want to use, or enter parameters to
create a new public IP address. You can use a public IP address to access your virtual machine
over the Internet. Make sure that you also create a network security group to help secure
access to your virtual machine.
Network security group: For more information, see Control network traffic flow with
network security groups.
Extensions: You can install virtual machine extensions by adding them to the deployment. You do
not need to add extensions in this step. The extensions required for SAP support are installed later.
See chapter Configure the Azure Enhanced Monitoring Extension for SAP in this guide.
High Availability: Select an availability set, or enter the parameters to create a new availability set.
For more information, see Azure availability sets.
Monitoring
Boot diagnostics: You can select Disable for boot diagnostics.
Guest OS diagnostics: You can select Disable for monitoring diagnostics.
4. Summary:
Review your selections, and then select OK.
Your virtual machine is deployed in the resource group you selected.
Create a virtual machine by using a template
You can create a virtual machine by using one of the SAP templates published in the azure-quickstart-templates
GitHub repository. You also can manually create a virtual machine by using the Azure portal, PowerShell, or
Azure CLI.
Two-tier configuration (only one virtual machine) template (sap-2-tier-marketplace-image)
To create a two-tier system by using only one virtual machine, use this template.
Two-tier configuration (only one virtual machine) template - Managed Disks (sap-2-tier-marketplace-image-md)
To create a two-tier system by using only one virtual machine and Managed Disks, use this template.
Three-tier configuration (multiple virtual machines) template (sap-3-tier-marketplace-image)
To create a three-tier system by using multiple virtual machines, use this template.
Three-tier configuration (multiple virtual machines) template - Managed Disks (sap-3-tier-marketplace-image-md)
To create a three-tier system by using multiple virtual machines and Managed Disks, use this template.
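These templates can also be deployed from the command line. A hedged sketch with the 2017-era Azure CLI 2.0; the resource-group name and location are placeholders, and the template URI is illustrative (verify the current path in the azure-quickstart-templates repository before use):

```shell
# Create a resource group and deploy the two-tier quickstart template into it
az group create --name SAP-Quickstart --location westeurope
az group deployment create \
    --resource-group SAP-Quickstart \
    --template-uri https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/sap-2-tier-marketplace-image/azuredeploy.json
```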
In the Azure portal, enter the following parameters for the template:
1. Basics:
Subscription: The subscription to use to deploy the template.
Resource group: The resource group to use to deploy the template. You can create a new resource
group, or you can select an existing resource group in the subscription.
Location: Where to deploy the template. If you selected an existing resource group, the location of
that resource group is used.
2. Settings:
SAP System ID: The SAP System ID (SID).
OS type: The operating system you want to deploy, for example, Windows Server 2012 R2, SUSE
Linux Enterprise Server 12 (SLES 12), Red Hat Enterprise Linux 7.2 (RHEL 7.2), or Oracle Linux 7.2.
The list view does not show all supported operating systems. For more information about
supported operating systems for SAP software deployment, see SAP Note 1928533.
SAP system size: The size of the SAP system.
The number of SAPS the new system provides. If you are not sure how many SAPS the system
requires, ask your SAP Technology Partner or System Integrator.
System availability (three-tier template only): The system availability.
Select HA for a configuration that is suitable for a high-availability installation. Two database
servers and two servers for ABAP SAP Central Services (ASCS) are created.
Storage type (two-tier template only): The type of storage to use.
For larger systems, we highly recommend using Azure Premium Storage. For more information
about storage types, see these resources:
Use of Azure Premium SSD Storage for SAP DBMS Instance
Microsoft Azure Storage in Azure Virtual Machines DBMS deployment for SAP NetWeaver
Premium Storage: High-performance storage for Azure Virtual Machine workloads
Introduction to Microsoft Azure Storage
Admin username and Admin password: A username and password. A new user is created, for
signing in to the virtual machine.
New or existing subnet: Determines whether a new virtual network and subnet are created or an
existing subnet is used. If you already have a virtual network that is connected to your on-premises
network, select Existing.
Subnet ID: The ID of the subnet the virtual machines will connect to. Select the subnet of your virtual
private network (VPN) or Azure ExpressRoute virtual network to use to connect the virtual machine
to your on-premises network. The ID usually looks like this: /subscriptions/<subscription id>/resourceGroups/<resource group name>/providers/Microsoft.Network/virtualNetworks/<virtual network name>/subnets/<subnet name>
3. Terms and conditions:
Review and accept the legal terms.
4. Select Purchase.
The Azure VM Agent is deployed by default when you use an image from the Azure Marketplace.
Configure proxy settings
Depending on how your on-premises network is configured, you might need to set up the proxy on your VM. If
your VM is connected to your on-premises network via VPN or ExpressRoute, the VM might not be able to
access the Internet, and won't be able to download the required extensions or collect monitoring data. For
more information, see Configure the proxy.
Join a domain (Windows only)
If your Azure deployment is connected to an on-premises Active Directory or DNS instance via an Azure site-to-site
VPN connection or ExpressRoute (this is called cross-premises in Azure Virtual Machines planning and
implementation for SAP NetWeaver), it is expected that the VM joins an on-premises domain. For more
information about considerations for this task, see Join a VM to an on-premises domain (Windows only).
Configure monitoring
To be sure SAP supports your environment, set up the Azure Monitoring Extension for SAP as described in
Configure the Azure Enhanced Monitoring Extension for SAP. Check the prerequisites for SAP monitoring, and
required minimum versions of SAP Kernel and SAP Host Agent, in the resources listed in SAP resources.
Monitoring check
Check whether monitoring is working, as described in Checks and troubleshooting for setting up end-to-end
monitoring.
Post-deployment steps
After you create the VM and the VM is deployed, you need to install the required software components in the
VM. Because of the deployment/software installation sequence in this type of VM deployment, the software to
be installed must already be available, either in Azure, on another VM, or as a disk that can be attached. Or,
consider using a cross-premises scenario, in which connectivity to the on-premises assets (installation shares)
is available.
After you deploy your VM in Azure, follow the same guidelines and tools to install the SAP software on your
VM as you would in an on-premises environment. To install SAP software on an Azure VM, both SAP and
Microsoft recommend that you upload and store the SAP installation media on Azure VHDs or Managed Disks,
or that you create an Azure VM that works as a file server that has all the required SAP installation media.
Scenario 2: Deploying a VM with a custom image for SAP
Because different versions of an operating system or DBMS have different patch requirements, the images you
find in the Azure Marketplace might not meet your needs. You might instead want to create a VM by using
your own OS/DBMS VM image, which you can deploy again later. You use different steps to create a private
image for Linux than to create one for Windows.

Windows
To prepare a Windows image that you can use to deploy multiple virtual machines, the Windows settings
(like Windows SID and hostname) must be abstracted or generalized on the on-premises VM. You can use
sysprep to do this.

Linux
To prepare a Linux image that you can use to deploy multiple virtual machines, some Linux settings must
be abstracted or generalized on the on-premises VM. You can use waagent -deprovision to do this. For
more information, see Capture a Linux virtual machine running on Azure and the Azure Linux agent user
guide.

You can prepare and create a custom image, and then use it to create multiple new VMs. This is described in
Azure Virtual Machines planning and implementation for SAP NetWeaver. Set up your database content either
by using SAP Software Provisioning Manager to install a new SAP system (restores a database backup from a
disk that's attached to the virtual machine) or by directly restoring a database backup from Azure storage, if
your DBMS supports it. For more information, see Azure Virtual Machines DBMS deployment for SAP
NetWeaver. If you have already installed an SAP system on your on-premises VM (especially for two-tier
systems), you can adapt the SAP system settings after the deployment of the Azure VM by using the System
Rename procedure supported by SAP Software Provisioning Manager (SAP Note 1619720). Otherwise, you
can install the SAP software after you deploy the Azure VM.
The following flowchart shows the SAP-specific sequence of steps for deploying a VM from a custom image:

Create a virtual machine by using the Azure portal


The easiest way to create a new virtual machine from a Managed Disk image is by using the Azure portal. For more information on how to create a Managed Disk image, see Capture a managed image of a generalized VM in Azure.
1. Go to
https://ms.portal.azure.com/#blade/HubsExtension/Resources/resourceType/Microsoft.Compute%2Fimages.
Or, in the Azure portal menu, select Images.
2. Select the Managed Disk image you want to deploy, and then select Create VM.
The wizard guides you through setting the required parameters to create the virtual machine, in addition to all
required resources, like network interfaces and storage accounts. Some of these parameters are:
1. Basics:
Name: The name of the resource (the virtual machine name).
VM disk type: Select the disk type of the OS disk. If you want to use Premium Storage for your data
disks, we recommend using Premium Storage for the OS disk as well.
Username and password or SSH public key: Enter the username and password of the user that is
created during the provisioning. For a Linux virtual machine, you can enter the public Secure Shell
(SSH) key that you use to sign in to the machine.
Subscription: Select the subscription that you want to use to provision the new virtual machine.
Resource group: The name of the resource group for the VM. You can enter either the name of a
new resource group or the name of a resource group that already exists.
Location: Where to deploy the new virtual machine. If you want to connect the virtual machine to
your on-premises network, make sure you select the location of the virtual network that connects
Azure to your on-premises network. For more information, see Microsoft Azure networking in Azure
Virtual Machines planning and implementation for SAP NetWeaver.
2. Size:
For a list of supported VM types, see SAP Note 1928533. Be sure you select the correct VM type if you
want to use Azure Premium Storage. Not all VM types support Premium Storage. For more information,
see Storage: Microsoft Azure Storage and data disks and Azure Premium Storage in Azure Virtual
Machines planning and implementation for SAP NetWeaver.
3. Settings:
Storage
Disk Type: Select the disk type of the OS disk. If you want to use Premium Storage for your
data disks, we recommend using Premium Storage for the OS disk as well.
Use managed disks: If you want to use Managed Disks, select Yes. For more information
about Managed Disks, see chapter Managed Disks in the planning guide.
Network
Virtual network and Subnet: To integrate the virtual machine with your intranet, select the
virtual network that is connected to your on-premises network.
Public IP address: Select the public IP address that you want to use, or enter parameters to
create a new public IP address. You can use a public IP address to access your virtual machine
over the Internet. Make sure that you also create a network security group to help secure
access to your virtual machine.
Network security group: For more information, see Control network traffic flow with
network security groups.
Extensions: You can install virtual machine extensions by adding them to the deployment. You do not need to add extensions in this step. The extensions required for SAP support are installed later. See chapter Configure the Azure Enhanced Monitoring Extension for SAP in this guide.
High Availability: Select an availability set, or enter the parameters to create a new availability set.
For more information, see Azure availability sets.
Monitoring
Boot diagnostics: You can select Disable for boot diagnostics.
Guest OS diagnostics: You can select Disable for monitoring diagnostics.
4. Summary:
Review your selections, and then select OK.
Your virtual machine is deployed in the resource group you selected.
Create a virtual machine by using a template
To create a deployment by using a private OS image from the Azure portal, use one of the following SAP
templates. These templates are published in the azure-quickstart-templates GitHub repository. You also can
manually create a virtual machine, by using PowerShell.
Two-tier configuration (only one virtual machine) template (sap-2-tier-user-image)
To create a two-tier system by using only one virtual machine, use this template.
Two-tier configuration (only one virtual machine) template - Managed Disk Image (sap-2-tier-
user-image-md)
To create a two-tier system by using only one virtual machine and a Managed Disk image, use this
template.
Three-tier configuration (multiple virtual machines) template (sap-3-tier-user-image)
To create a three-tier system by using multiple virtual machines or your own OS image, use this
template.
Three-tier configuration (multiple virtual machines) template - Managed Disk Image (sap-3-
tier-user-image-md)
To create a three-tier system by using multiple virtual machines or your own OS image and a Managed
Disk image, use this template.
In the Azure portal, enter the following parameters for the template:
1. Basics:
Subscription: The subscription to use to deploy the template.
Resource group: The resource group to use to deploy the template. You can create a new resource
group or select an existing resource group in the subscription.
Location: Where to deploy the template. If you selected an existing resource group, the location of
that resource group is used.
2. Settings:
SAP System ID: The SAP System ID.
OS type: The operating system type you want to deploy (Windows or Linux).
SAP system size: The size of the SAP system.
The number of SAPS the new system provides. If you are not sure how many SAPS the system
requires, ask your SAP Technology Partner or System Integrator.
System availability (three-tier template only): The system availability.
Select HA for a configuration that is suitable for a high-availability installation. Two database
servers and two servers for ASCS are created.
Storage type (two-tier template only): The type of storage to use.
For larger systems, we highly recommend using Azure Premium Storage. For more information
about storage types, see the following resources:
Use of Azure Premium SSD Storage for SAP DBMS Instance
Microsoft Azure Storage in Azure Virtual Machines DBMS deployment for SAP NetWeaver
Premium Storage: High-performance storage for Azure virtual machine workloads
Introduction to Microsoft Azure Storage
User image VHD URI (unmanaged disk image template only): The URI of the private OS image VHD,
for example, https://<accountname>.blob.core.windows.net/vhds/userimage.vhd.
User image storage account (unmanaged disk image template only): The name of the storage
account where the private OS image is stored, for example, <accountname> in
https://<accountname>.blob.core.windows.net/vhds/userimage.vhd.
userImageId (managed disk image template only): The ID of the Managed Disk image you want to use.
Admin username and Admin password: The username and password.
A new user is created, for signing in to the virtual machine.
New or existing subnet: Determines whether a new virtual network and subnet is created or an
existing subnet is used. If you already have a virtual network that is connected to your on-premises
network, select Existing.
Subnet ID: The ID of the subnet to which the virtual machines will connect. Select the subnet
of your VPN or ExpressRoute virtual network to use to connect the virtual machine to your on-
premises network. The ID usually looks like this:
/subscriptions/<subscription id>/resourceGroups/<resource group
name>/providers/Microsoft.Network/virtualNetworks/<virtual network name>/subnets/<subnet
name>
3. Terms and conditions:
Review and accept the legal terms.
4. Select Purchase.
Install the VM Agent (Linux only)
To use the templates described in the preceding section, the Linux Agent must already be installed in the user
image, or the deployment will fail. Download and install the VM Agent in the user image as described in
Download, install, and enable the Azure VM Agent. If you don't use the templates, you also can install the VM
Agent later.
Join a domain (Windows only)
If your Azure deployment is connected to an on-premises Active Directory or DNS instance via an Azure site-to-site VPN connection or Azure ExpressRoute (this is called cross-premises in Azure Virtual Machines planning and implementation for SAP NetWeaver), the VM is expected to join an on-premises domain. For more information about considerations for this step, see Join a VM to an on-premises domain (Windows only).
Configure proxy settings
Depending on how your on-premises network is configured, you might need to set up the proxy on your VM. If
your VM is connected to your on-premises network via VPN or ExpressRoute, the VM might not be able to
access the Internet, and won't be able to download the required extensions or collect monitoring data. For
more information, see Configure the proxy.
Configure monitoring
To be sure SAP supports your environment, set up the Azure Monitoring Extension for SAP as described in
Configure the Azure Enhanced Monitoring Extension for SAP. Check the prerequisites for SAP monitoring, and
required minimum versions of SAP Kernel and SAP Host Agent, in the resources listed in SAP resources.
Monitoring check
Check whether monitoring is working, as described in Checks and troubleshooting for setting up end-to-end
monitoring.
Scenario 3: Moving an on-premises VM by using a non-generalized Azure VHD with SAP
In this scenario, you plan to move a specific SAP system from an on-premises environment to Azure. You can
do this by uploading the VHD that has the OS, the SAP binaries, and possibly the DBMS binaries, plus the
VHDs with the data and log files of the DBMS, to Azure. Unlike the scenario described in Scenario 2: Deploying
a VM with a custom image for SAP, in this case, you keep the hostname, SAP SID, and SAP user accounts in the
Azure VM, because they were configured in the on-premises environment. You do not need to generalize the
OS. This scenario applies most often to cross-premises scenarios where part of the SAP landscape runs on-
premises and part of it runs on Azure.
In this scenario, the VM Agent is not automatically installed during deployment. Because the VM Agent and the
Azure Enhanced Monitoring Extension for SAP are required to run SAP NetWeaver on Azure, you need to
download, install, and enable both components manually after you create the virtual machine.
For more information about the Azure VM Agent, see the following resources.

Windows
Azure Virtual Machine Agent overview

Linux
Azure Linux Agent User Guide

The following flowchart shows the sequence of steps for moving an on-premises VM by using a non-
generalized Azure VHD:
If the disk is already uploaded and defined in Azure (see Azure Virtual Machines planning and implementation
for SAP NetWeaver), do the tasks described in the next few sections.
Create a virtual machine
To create a deployment by using a private OS disk through the Azure portal, use the SAP template published in
the azure-quickstart-templates GitHub repository. You also can manually create a virtual machine, by using
PowerShell.
Two-tier configuration (only one virtual machine) template (sap-2-tier-user-disk)
To create a two-tier system by using only one virtual machine, use this template.
Two-tier configuration (only one virtual machine) template - Managed Disk (sap-2-tier-user-
disk-md)
To create a two-tier system by using only one virtual machine and a Managed Disk, use this template.
In the Azure portal, enter the following parameters for the template:
1. Basics:
Subscription: The subscription to use to deploy the template.
Resource group: The resource group to use to deploy the template. You can create a new resource
group or select an existing resource group in the subscription.
Location: Where to deploy the template. If you selected an existing resource group, the location of
that resource group is used.
2. Settings:
SAP System ID: The SAP System ID.
OS type: The operating system type you want to deploy (Windows or Linux).
SAP system size: The size of the SAP system.
The number of SAPS the new system provides. If you are not sure how many SAPS the system
requires, ask your SAP Technology Partner or System Integrator.
Storage type (two-tier template only): The type of storage to use.
For larger systems, we highly recommend using Azure Premium Storage. For more information
about storage types, see the following resources:
Use of Azure Premium SSD Storage for SAP DBMS Instance
Microsoft Azure Storage in Azure Virtual Machine DBMS deployment for SAP NetWeaver
Premium Storage: High-performance storage for Azure Virtual Machine workloads
Introduction to Microsoft Azure Storage
OS disk VHD URI (unmanaged disk template only): The URI of the private OS disk, for example,
https://<accountname>.blob.core.windows.net/vhds/osdisk.vhd.
OS disk Managed Disk Id (managed disk template only): The ID of the Managed Disk OS disk, for example, /subscriptions/92d102f7-81a5-4df7-9877-54987ba97dd9/resourceGroups/group/providers/Microsoft.Compute/disks/WIN
New or existing subnet: Determines whether a new virtual network and subnet are created, or an
existing subnet is used. If you already have a virtual network that is connected to your on-premises
network, select Existing.
Subnet ID: The ID of the subnet to which the virtual machines will connect. Select the subnet
of your VPN or Azure ExpressRoute virtual network to use to connect the virtual machine to your
on-premises network. The ID usually looks like this:
/subscriptions/<subscription id>/resourceGroups/<resource group
name>/providers/Microsoft.Network/virtualNetworks/<virtual network name>/subnets/<subnet
name>
3. Terms and conditions:
Review and accept the legal terms.
4. Select Purchase.
Install the VM Agent
To use the templates described in the preceding section, the VM Agent must be installed on the OS disk, or the
deployment will fail. Download and install the VM Agent in the VM, as described in Download, install, and
enable the Azure VM Agent.
If you don't use the templates described in the preceding section, you can also install the VM Agent afterwards.
Join a domain (Windows only)
If your Azure deployment is connected to an on-premises Active Directory or DNS instance via an Azure site-to-site VPN connection or ExpressRoute (this is called cross-premises in Azure Virtual Machines planning and implementation for SAP NetWeaver), the VM is expected to join an on-premises domain. For more information about considerations for this task, see Join a VM to an on-premises domain (Windows only).
Configure proxy settings
Depending on how your on-premises network is configured, you might need to set up the proxy on your VM. If
your VM is connected to your on-premises network via VPN or ExpressRoute, the VM might not be able to
access the Internet, and won't be able to download the required extensions or collect monitoring data. For
more information, see Configure the proxy.
Configure monitoring
To be sure SAP supports your environment, set up the Azure Monitoring Extension for SAP as described in
Configure the Azure Enhanced Monitoring Extension for SAP. Check the prerequisites for SAP monitoring, and
required minimum versions of SAP Kernel and SAP Host Agent, in the resources listed in SAP resources.
Monitoring check
Check whether monitoring is working, as described in Checks and troubleshooting for setting up end-to-end
monitoring.

Update the monitoring configuration for SAP


Update the SAP monitoring configuration in any of the following scenarios:
The joint Microsoft/SAP team extends the monitoring capabilities and requests more or fewer counters.
Microsoft introduces a new version of the underlying Azure infrastructure that delivers the monitoring data,
and the Azure Enhanced Monitoring Extension for SAP needs to be adapted to those changes.
You mount additional data disks to your Azure VM or you remove a data disk. In this scenario, update the
collection of storage-related data. Changing your configuration by adding or deleting endpoints or by
assigning IP addresses to a VM does not affect the monitoring configuration.
You change the size of your Azure VM, for example, from size A5 to any other VM size.
You add new network interfaces to your Azure VM.
To update monitoring settings, update the monitoring infrastructure by following the steps in Configure the
Azure Enhanced Monitoring Extension for SAP.
Detailed tasks for SAP software deployment
This section has detailed steps for doing specific tasks in the configuration and deployment process.
Deploy Azure PowerShell cmdlets
1. Go to Microsoft Azure Downloads.
2. Under Command-line tools, under PowerShell, select Windows install.
3. In the Microsoft Download Manager dialog box, for the downloaded file (for example,
WindowsAzurePowershellGet.3f.3f.3fnew.exe), select Run.
4. To run Microsoft Web Platform Installer (Microsoft Web PI), select Yes.
5. A page that looks like this appears:

6. Select Install, and then accept the Microsoft Software License Terms.
7. PowerShell is installed. Select Finish to close the installation wizard.
Check frequently for updates to the PowerShell cmdlets, which usually are updated monthly. The easiest way to
check for updates is to do the preceding installation steps, up to the installation page shown in step 5. The
release date and release number of the cmdlets are included on the page shown in step 5. Unless stated
otherwise in SAP Note 1928533 or SAP Note 2015553, we recommend that you work with the latest version of
Azure PowerShell cmdlets.
To check the version of the Azure PowerShell cmdlets that are installed on your computer, run this PowerShell
command:

(Get-Module AzureRm.Compute).Version

The result looks like this:


If the Azure cmdlet version installed on your computer is the current version, the first page of the installation wizard indicates it by adding (Installed) to the product title (see the following screenshot). Your Azure PowerShell cmdlets are up to date. To close the installation wizard, select Exit.

Deploy Azure CLI


1. Go to Microsoft Azure Downloads.
2. Under Command-line tools, under Azure command-line interface, select the Install link for your
operating system.
3. In the Microsoft Download Manager dialog box, for the downloaded file (for example,
WindowsAzureXPlatCLI.3f.3f.3fnew.exe), select Run.
4. To run Microsoft Web Platform Installer (Microsoft Web PI), select Yes.
5. A page that looks like this appears:
6. Select Install, and then accept the Microsoft Software License Terms.
7. Azure CLI is installed. Select Finish to close the installation wizard.
Check frequently for updates to Azure CLI, which usually is updated monthly. The easiest way to check for
updates is to do the preceding installation steps, up to the installation page shown in step 5.
To check the version of Azure CLI that is installed on your computer, run this command:

azure --version

The result looks like this:

Join a VM to an on-premises domain (Windows only)


If you deploy SAP VMs in a cross-premises scenario, where on-premises Active Directory and DNS are extended in Azure, the VMs are expected to join an on-premises domain. The detailed steps you take to join a VM to an on-premises domain, and the additional software required to be a member of an on-premises domain, vary by customer. Usually, to join a VM to an on-premises domain, you need to install additional software, like antimalware software, and backup or monitoring software.
In this scenario, you also need to make sure that if Internet proxy settings are forced when a VM joins a domain
in your environment, the Windows Local System Account (S-1-5-18) in the Guest VM has the same proxy
settings. The easiest option is to force the proxy by using a domain Group Policy, which applies to systems in
the domain.
Download, install, and enable the Azure VM Agent
For virtual machines that are deployed from an OS image that is not generalized (for example, an image that
doesn't originate in the Windows System Preparation, or sysprep, tool), you need to manually download,
install, and enable the Azure VM Agent.
If you deploy a VM from the Azure Marketplace, this step is not required. Images from the Azure Marketplace
already have the Azure VM Agent.
Windows
1. Download the Azure VM Agent:
a. Download the Azure VM Agent installer package.
b. Store the VM Agent MSI package locally on a personal computer or server.
2. Install the Azure VM Agent:
a. Connect to the deployed Azure VM by using Remote Desktop Protocol (RDP).
b. Open a Windows Explorer window on the VM and select the target directory for the MSI file of the
VM Agent.
c. Drag the Azure VM Agent Installer MSI file from your local computer/server to the target directory of
the VM Agent on the VM.
d. Double-click the MSI file on the VM.
3. For VMs that are joined to on-premises domains, make sure that any Internet proxy settings also apply to the Windows Local System account (S-1-5-18) in the VM, as described in Configure the proxy. The VM Agent runs in this context and needs to be able to connect to Azure.
No user interaction is required to update the Azure VM Agent. The VM Agent is automatically updated, and
does not require a VM restart.
Linux
Use the following commands to install the VM Agent for Linux:
SUSE Linux Enterprise Server (SLES)

sudo zypper install WALinuxAgent

Red Hat Enterprise Linux (RHEL) or Oracle Linux

sudo yum install WALinuxAgent

If the agent is already installed, to update the Azure Linux Agent, do the steps described in Update the Azure
Linux Agent on a VM to the latest version from GitHub.
Configure the proxy
The steps you take to configure the proxy in Windows are different from the way you configure the proxy in
Linux.
Windows
Proxy settings must be set up correctly for the Local System account to access the Internet. If your proxy
settings are not set by Group Policy, you can configure the settings for the Local System account.
1. Go to Start, enter gpedit.msc, and then press Enter.
2. Select Computer Configuration > Administrative Templates > Windows Components > Internet
Explorer. Make sure that the setting Make proxy settings per-machine (rather than per-user) is
disabled or not configured.
3. In Control Panel, go to Network and Sharing Center > Internet Options.
4. On the Connections tab, select the LAN settings button.
5. Clear the Automatically detect settings check box.
6. Select the Use a proxy server for your LAN check box, and then enter the proxy address and port.
7. Select the Advanced button.
8. In the Exceptions box, enter the IP address 168.63.129.16. Select OK.
Linux
Configure the correct proxy in the configuration file of the Microsoft Azure Guest Agent, which is located at /etc/waagent.conf.
Set the following parameters:
1. HTTP proxy host. For example, set it to proxy.corp.local.

HttpProxy.Host=<proxy host>

2. HTTP proxy port. For example, set it to 80.

HttpProxy.Port=<port of the proxy host>

3. Restart the agent.

sudo service waagent restart

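The two parameter edits in the steps above can also be scripted. The following sketch applies them to a local copy of the file; the proxy host proxy.corp.local and port 80 are the example values from the steps, and on a real VM you would edit /etc/waagent.conf in place (with root privileges) instead:

```shell
# Sketch: set the agent proxy parameters in a local copy of the config file.
# proxy.corp.local and port 80 are example values; the real file is /etc/waagent.conf.
conf=waagent.conf.sample
printf 'HttpProxy.Host=None\nHttpProxy.Port=None\n' > "$conf"

# Point the agent at the corporate proxy.
sed -i 's|^HttpProxy.Host=.*|HttpProxy.Host=proxy.corp.local|' "$conf"
sed -i 's|^HttpProxy.Port=.*|HttpProxy.Port=80|' "$conf"

grep '^HttpProxy' "$conf"
```

After editing the real file, restart the agent with sudo service waagent restart, as shown above.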
The proxy settings in /etc/waagent.conf also apply to the required VM extensions. If you want to use the Azure
repositories, make sure that the traffic to these repositories is not going through your on-premises intranet. If
you created user-defined routes to enable forced tunneling, make sure that you add a route that routes traffic
to the repositories directly to the Internet, and not through your site-to-site VPN connection.
SLES
You also need to add routes for the IP addresses listed in /etc/regionserverclnt.cfg. The following figure
shows an example:
RHEL
You also need to add routes for the IP addresses of the hosts listed in /etc/yum.repos.d/rhui-load-balancers. For an example, see the preceding figure.
Oracle Linux
There are no repositories for Oracle Linux on Azure. You need to configure your own repositories for
Oracle Linux or use the public repositories.
For more information about user-defined routes, see User-defined routes and IP forwarding.
Configure the Azure Enhanced Monitoring Extension for SAP
When you've prepared the VM as described in Deployment scenarios of VMs for SAP on Azure, the Azure VM
Agent is installed on the virtual machine. The next step is to deploy the Azure Enhanced Monitoring Extension
for SAP, which is available in the Azure Extension Repository in the global Azure datacenters. For more
information, see Azure Virtual Machines planning and implementation for SAP NetWeaver.
You can use PowerShell or Azure CLI to install and configure the Azure Enhanced Monitoring Extension for
SAP. To install the extension on a Windows or Linux VM by using a Windows machine, see Azure PowerShell.
To install the extension on a Linux VM by using a Linux desktop, see Azure CLI.
Azure PowerShell for Linux and Windows VMs
To install the Azure Enhanced Monitoring Extension for SAP by using PowerShell:
1. Make sure that you have installed the latest version of the Azure PowerShell cmdlet. For more information,
see Deploying Azure PowerShell cmdlets.
2. Run the following PowerShell cmdlets. For a list of available environments, run the Get-AzureRmEnvironment cmdlet. If you want to use global Azure, your environment is AzureCloud. For Azure in China, select AzureChinaCloud.

$env = Get-AzureRmEnvironment -Name <name of the environment>


Login-AzureRmAccount -Environment $env
Set-AzureRmContext -SubscriptionName <subscription name>

Set-AzureRmVMAEMExtension -ResourceGroupName <resource group name> -VMName <virtual machine name>

After you enter your account data and identify the Azure virtual machine, the script deploys the required
extensions and enables the required features. This can take several minutes. For more information about
Set-AzureRmVMAEMExtension , see Set-AzureRmVMAEMExtension.

The Set-AzureRmVMAEMExtension configuration does all the steps to configure host monitoring for SAP.
The script output includes the following information:
Confirmation that monitoring for the OS disk and all additional data disks has been configured.
The next two messages confirm the configuration of Storage Metrics for a specific storage account.
One line of output gives the status of the actual update of the monitoring configuration.
Another line of output confirms that the configuration has been deployed or updated.
The last line of output is informational. It shows your options for testing the monitoring configuration.
To check that all steps of Azure Enhanced Monitoring have been executed successfully, and that the Azure
Infrastructure provides the necessary data, proceed with the readiness check for the Azure Enhanced
Monitoring Extension for SAP, as described in Readiness check for Azure Enhanced Monitoring for SAP.
Wait 15-30 minutes for Azure Diagnostics to collect the relevant data.
Azure CLI for Linux VMs
To install the Azure Enhanced Monitoring Extension for SAP by using Azure CLI:
1. Install Azure CLI 1.0, as described in Install the Azure CLI 1.0.
2. Sign in with your Azure account:

azure login

3. Switch to Azure Resource Manager mode:

azure config mode arm


4. Enable Azure Enhanced Monitoring:

azure vm enable-aem <resource-group-name> <vm-name>

5. Verify that the Azure Enhanced Monitoring Extension is active on the Azure Linux VM. Check whether the
file /var/lib/AzureEnhancedMonitor/PerfCounters exists. If it exists, at a command prompt, run this
command to display information collected by the Azure Enhanced Monitor:

cat /var/lib/AzureEnhancedMonitor/PerfCounters

The output looks like this:

2;cpu;Current Hw Frequency;;0;2194.659;MHz;60;1444036656;saplnxmon;
2;cpu;Max Hw Frequency;;0;2194.659;MHz;0;1444036656;saplnxmon;

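Each line of the PerfCounters file is a semicolon-separated record. The following awk sketch pulls a few fields out of the sample line above; the field positions used here are inferred from that sample output, not from a documented file format:

```shell
# Sketch: split one PerfCounters line into its semicolon-separated fields.
# Field positions (category, name, value, unit) are inferred from the sample
# output above and are an assumption, not documented structure.
line='2;cpu;Current Hw Frequency;;0;2194.659;MHz;60;1444036656;saplnxmon;'
parsed="$(echo "$line" | awk -F';' '{ printf "category=%s name=%s value=%s unit=%s", $2, $3, $6, $7 }')"
echo "$parsed"
```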
Checks and troubleshooting for end-to-end monitoring


After you have deployed your Azure VM and set up the relevant Azure monitoring infrastructure, check
whether all the components of the Azure Enhanced Monitoring Extension are working as expected.
Run the readiness check for the Azure Enhanced Monitoring Extension for SAP as described in Readiness check
for the Azure Enhanced Monitoring Extension for SAP. If all readiness check results are positive and all relevant
performance counters appear OK, Azure monitoring has been set up successfully. You can proceed with the
installation of SAP Host Agent as described in the SAP Notes in SAP resources. If the readiness check indicates
that counters are missing, run the health check for the Azure monitoring infrastructure, as described in Health
check for Azure monitoring infrastructure configuration. For more troubleshooting options, see
Troubleshooting Azure monitoring for SAP.
Readiness check for the Azure Enhanced Monitoring Extension for SAP
This check makes sure that all performance metrics that appear inside your SAP application are provided by the
underlying Azure monitoring infrastructure.
Run the readiness check on a Windows VM
1. Sign in to the Azure virtual machine (using an admin account is not necessary).
2. Open a Command Prompt window.
3. At the command prompt, change the directory to the installation folder of the Azure Enhanced
Monitoring Extension for SAP:
C:\Packages\Plugins\Microsoft.AzureCAT.AzureEnhancedMonitoring.AzureCATExtensionHandler\
<version>\drop
The version in the path to the monitoring extension might vary. If you see folders for multiple versions
of the monitoring extension in the installation folder, check the configuration of the
AzureEnhancedMonitoring Windows service, and then switch to the folder indicated as Path to
executable.
4. At the command prompt, run azperflib.exe without any parameters.

NOTE
Azperflib.exe runs in a loop and updates the collected counters every 60 seconds. To end the loop, close the
Command Prompt window.

If the Azure Enhanced Monitoring Extension is not installed, or the AzureEnhancedMonitoring service is not
running, the extension has not been configured correctly. For detailed information about how to deploy the
extension, see Troubleshooting the Azure monitoring infrastructure for SAP.
Check the output of azperflib.exe

Azperflib.exe output shows all populated Azure performance counters for SAP. At the bottom of the list of
collected counters, a summary and health indicator show the status of Azure monitoring.

Check the result returned for the Counters total output and for the Health status.
Interpret the resulting values as follows:
AZPERFLIB.EXE RESULT VALUE     AZURE MONITORING HEALTH STATUS

API Calls - not available      Counters that are not available might be either not
                               applicable to the virtual machine configuration, or
                               are errors. See Health status.

Counters total - empty         The following two Azure storage counters can be empty:
                               Storage Read Op Latency Server msec
                               Storage Read Op Latency E2E msec
                               All other counters must have values.

Health status                  Only OK if the return status shows OK.

Diagnostics                    Detailed information about the health status.

If the Health status value is not OK, follow the instructions in Health check for Azure monitoring infrastructure
configuration.
Run the readiness check on a Linux VM
1. Connect to the Azure Virtual Machine by using SSH.
2. Check the output of the Azure Enhanced Monitoring Extension.
a. Run more /var/lib/AzureEnhancedMonitor/PerfCounters

Expected result: Returns a list of performance counters. The file should not be empty.
b. Run cat /var/lib/AzureEnhancedMonitor/PerfCounters | grep Error

Expected result: Returns one line where the error is none, for example,
3;config;Error;;0;0;none;0;1456416792;tst-servercs;
c. Run more /var/lib/AzureEnhancedMonitor/LatestErrorRecord

Expected result: Returns as empty or does not exist.
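The three file checks above can be combined into one small script. This is a convenience sketch only, not part of the official procedure; the directory path comes from the steps above, while the function name is made up here:

```shell
# Convenience sketch: summarize the three file checks above in one run.
# Pass a different directory as the first argument for testing.
check_aem_files() {
    mon_dir="${1:-/var/lib/AzureEnhancedMonitor}"
    pc="$mon_dir/PerfCounters"
    err="$mon_dir/LatestErrorRecord"

    # a. PerfCounters should exist and not be empty.
    [ -s "$pc" ] && echo "PerfCounters: populated" || echo "PerfCounters: missing or empty"

    # b. The config error record should read 'none'.
    if grep Error "$pc" 2>/dev/null | grep -q ';none;'; then
        echo "Error record: none"
    else
        echo "Error record: inspect 'grep Error' output manually"
    fi

    # c. LatestErrorRecord should be empty or absent.
    [ -s "$err" ] && echo "LatestErrorRecord: present - inspect it" \
                  || echo "LatestErrorRecord: absent or empty (expected)"
}

check_aem_files
```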


If the preceding check was not successful, run these additional checks:
1. Make sure that the waagent is installed and enabled.
a. Run sudo ls -al /var/lib/waagent/

Expected result: Lists the content of the waagent directory.


b. Run ps -ax | grep waagent

Expected result: Displays one entry similar to: python /usr/sbin/waagent -daemon

2. Make sure that the Azure Enhanced Monitoring Extension is installed and running.
a. Run sudo sh -c 'ls -al /var/lib/waagent/Microsoft.OSTCExtensions.AzureEnhancedMonitorForLinux-*/'

Expected result: Lists the content of the Azure Enhanced Monitoring Extension directory.
b. Run ps -ax | grep AzureEnhanced

Expected result: Displays one entry similar to:


python /var/lib/waagent/Microsoft.OSTCExtensions.AzureEnhancedMonitorForLinux-2.0.0.2/handler.py
daemon

3. Install SAP Host Agent as described in SAP Note 1031096, and check the output of saposcol.
a. Run /usr/sap/hostctrl/exe/saposcol -d

b. Run dump ccm

c. Check whether the Virtualization_Configuration\Enhanced Monitoring Access metric is true.


If you already have an SAP NetWeaver ABAP application server installed, open transaction ST06 and check
whether enhanced monitoring is enabled.
If any of these checks fail, and for detailed information about how to redeploy the extension, see
Troubleshooting the Azure monitoring infrastructure for SAP.
Health check for the Azure monitoring infrastructure configuration
If some of the monitoring data is not delivered correctly as indicated by the test described in Readiness check
for Azure Enhanced Monitoring for SAP, run the Test-AzureRmVMAEMExtension cmdlet to check whether the
Azure monitoring infrastructure and the monitoring extension for SAP are configured correctly.
1. Make sure that you have installed the latest version of the Azure PowerShell cmdlet, as described in
Deploying Azure PowerShell cmdlets.
2. Run the following PowerShell cmdlet. For a list of available environments, run the cmdlet
Get-AzureRmEnvironment . To use global Azure, select the AzureCloud environment. For Azure in China,
select AzureChinaCloud.

$env = Get-AzureRmEnvironment -Name <name of the environment>
Login-AzureRmAccount -Environment $env
Set-AzureRmContext -SubscriptionName <subscription name>
Test-AzureRmVMAEMExtension -ResourceGroupName <resource group name> -VMName <virtual machine name>

3. Enter your account data and identify the Azure virtual machine.
4. The script tests the configuration of the virtual machine you select.

Make sure that every health check result is OK. If some checks do not display OK, run the update cmdlet as
described in Configure the Azure Enhanced Monitoring Extension for SAP. Wait 15 minutes, and repeat the
checks described in Readiness check for Azure Enhanced Monitoring for SAP and Health check for Azure
Monitoring Infrastructure Configuration. If the checks still indicate a problem with some or all counters, see
Troubleshooting the Azure monitoring infrastructure for SAP.
Troubleshooting the Azure monitoring infrastructure for SAP
Azure performance counters do not show up at all
The AzureEnhancedMonitoring Windows service collects performance metrics in Azure. If the service has not
been installed correctly or if it is not running in your VM, no performance metrics can be collected.
The installation directory of the Azure Enhanced Monitoring Extension is empty
Issue
The installation directory
C:\Packages\Plugins\Microsoft.AzureCAT.AzureEnhancedMonitoring.AzureCATExtensionHandler\
<version>\drop is empty.
Solution

The extension is not installed. Determine whether this is a proxy issue (as described earlier). You might need to
restart the machine or rerun the Set-AzureRmVMAEMExtension configuration script.
Service for Azure Enhanced Monitoring does not exist
Issue

The AzureEnhancedMonitoring Windows service does not exist.


Azperflib.exe output throws an error.
Solution

If the service does not exist, the Azure Enhanced Monitoring Extension for SAP has not been installed correctly.
Redeploy the extension by using the steps described for your deployment scenario in Deployment scenarios of
VMs for SAP in Azure.
About one hour after you deploy the extension, check again whether the Azure performance counters are
provided in the Azure VM.
Service for Azure Enhanced Monitoring exists, but fails to start
Issue

The AzureEnhancedMonitoring Windows service exists and is enabled, but fails to start. For more information,
check the application event log.
Solution

The configuration is incorrect. Restart the monitoring extension for the VM, as described in Configure the Azure
Enhanced Monitoring Extension for SAP.
Some Azure performance counters are missing
The AzureEnhancedMonitoring Windows service collects performance metrics in Azure. The service gets data
from several sources. Some configuration data is collected locally, and some performance metrics are read
from Azure Diagnostics. Storage counters come from the logging in your storage subscription.
If troubleshooting by using SAP Note 1999351 doesn't resolve the issue, rerun the Set-AzureRmVMAEMExtension
configuration script. You might have to wait an hour because storage analytics or diagnostics counters might
not be created immediately after they are enabled. If the problem persists, open an SAP customer support
message on the component BC-OP-NT-AZR for Windows or BC-OP-LNX-AZR for a Linux virtual machine.

Azure performance counters do not show up at all


Performance metrics in Azure are collected by a daemon. If the daemon is not running, no performance metrics
can be collected.
The installation directory of the Azure Enhanced Monitoring extension is empty
Issue

The directory /var/lib/waagent/ does not have a subdirectory for the Azure Enhanced Monitoring extension.
Solution

The extension is not installed. Determine whether this is a proxy issue (as described earlier). You might need to
restart the machine and/or rerun the Set-AzureRmVMAEMExtension configuration script.

Some Azure performance counters are missing


Performance metrics in Azure are collected by a daemon, which gets data from several sources. Some
configuration data is collected locally, and some performance metrics are read from Azure Diagnostics. Storage
counters come from the logs in your storage subscription.
For a complete and up-to-date list of known issues, see SAP Note 1999351, which has additional
troubleshooting information for Enhanced Azure Monitoring for SAP.
If troubleshooting by using SAP Note 1999351 does not resolve the issue, rerun the
Set-AzureRmVMAEMExtension configuration script as described in Configure the Azure Enhanced Monitoring
Extension for SAP. You might have to wait for an hour because storage analytics or diagnostics counters might
not be created immediately after they are enabled. If the problem persists, open an SAP customer support
message on the component BC-OP-NT-AZR for Windows or BC-OP-LNX-AZR for a Linux virtual machine.
Azure Virtual Machines DBMS deployment for
SAP NetWeaver
8/21/2017 97 min to read

NOTE
Azure has two different deployment models for creating and working with resources: Resource Manager and classic.
This article covers using the Resource Manager deployment model, which Microsoft recommends for new deployments
instead of the classic deployment model.

This guide is part of the documentation on implementing and deploying the SAP software on Microsoft
Azure. Before reading this guide, read the Planning and Implementation Guide. This document covers the
deployment of various Relational Database Management Systems (RDBMS) and related products in
combination with SAP on Microsoft Azure Virtual Machines (VMs) using the Azure Infrastructure as a Service
(IaaS) capabilities.
The paper complements the SAP Installation Documentation and SAP Notes, which represent the primary
resources for installations and deployments of SAP software on given platforms.

General considerations
In this chapter, considerations of running SAP-related DBMS systems in Azure VMs are introduced. There are
few references to specific DBMS systems in this chapter. Instead the specific DBMS systems are handled
within this paper, after this chapter.
Definitions upfront
Throughout the document, we use the following terms:
IaaS: Infrastructure as a Service.
PaaS: Platform as a Service.
SaaS: Software as a Service.
SAP Component: an individual SAP application such as ECC, BW, Solution Manager, or EP. SAP
components can be based on traditional ABAP or Java technologies or a non-NetWeaver based
application such as Business Objects.
SAP Environment: one or more SAP components logically grouped to perform a business function such as
Development, QAS, Training, DR, or Production.
SAP Landscape: The entire set of SAP assets in a customer's IT landscape. The SAP landscape
includes all production and non-production environments.
SAP System: The combination of DBMS layer and application layer of, for example, an SAP ERP
development system, SAP BW test system, SAP CRM production system, etc. In Azure deployments, it is
not supported to divide these two layers between on-premises and Azure. This means an SAP system is
either deployed on-premises or it is deployed in Azure. However, you can deploy the different systems of
an SAP landscape in Azure or on-premises. For example, you could deploy the SAP CRM development and
test systems in Azure but the SAP CRM production system on-premises.
Cloud-Only deployment: A deployment where the Azure subscription is not connected via a site-to-site or
ExpressRoute connection to the on-premises network infrastructure. In common Azure documentation
these kinds of deployments are also described as Cloud-Only deployments. Virtual Machines deployed
with this method are accessed through the Internet and public Internet endpoints assigned to the VMs in
Azure. The on-premises Active Directory (AD) and DNS is not extended to Azure in these types of
deployments. Hence the VMs are not part of the on-premises Active Directory. Note: Cloud-Only
deployments in this document are defined as complete SAP landscapes, which are running exclusively in
Azure without extension of Active Directory or name resolution from on-premises into public cloud.
Cloud-Only configurations are not supported for production SAP systems or configurations where SAP
STMS or other on-premises resources need to be used between SAP systems hosted on Azure and
resources residing on-premises.
Cross-Premises: Describes a scenario where VMs are deployed to an Azure subscription that has site-to-
site, multi-site, or ExpressRoute connectivity between the on-premises datacenter(s) and Azure. In
common Azure documentation, these kinds of deployments are also described as Cross-Premises
scenarios. The reason for the connection is to extend on-premises domains, on-premises Active Directory,
and on-premises DNS into Azure. The on-premises landscape is extended to the Azure assets of the
subscription. Having this extension, the VMs can be part of the on-premises domain. Domain users of the
on-premises domain can access the servers and can run services on those VMs (like DBMS services).
Communication and name resolution between VMs deployed on-premises and VMs deployed in Azure is
possible. We expect this to be the most common scenario for deploying SAP assets on Azure. For more
information, see this article and this article.

NOTE
Cross-Premises deployments of SAP systems where Azure Virtual Machines running SAP systems are members of an
on-premises domain are supported for production SAP systems. Cross-Premises configurations are supported for
deploying parts or complete SAP landscapes into Azure. Even running the complete SAP landscape in Azure requires
having those VMs being part of on-premises domain and ADS. In former versions of the documentation, we talked
about Hybrid-IT scenarios, where the term Hybrid is rooted in the fact that there is a cross-premises connectivity
between on-premises and Azure. In this case Hybrid also means that the VMs in Azure are part of the on-premises
Active Directory.

Some Microsoft documentation describes Cross-Premises scenarios a bit differently, especially for DBMS HA
configurations. In the case of the SAP-related documents, the Cross-Premises scenario just boils down to
having a site-to-site or private (ExpressRoute) connectivity and to the fact that the SAP landscape is
distributed between on-premises and Azure.
Resources
The following guides are available for the topic of SAP deployments on Azure:
Azure Virtual Machines planning and implementation for SAP NetWeaver
Azure Virtual Machines deployment for SAP NetWeaver
Azure Virtual Machines DBMS deployment for SAP NetWeaver (this document)
The following SAP Notes are related to the topic of SAP on Azure:

NOTE NUMBER   TITLE
1928533       SAP Applications on Azure: Supported Products and Azure VM types
2015553       SAP on Microsoft Azure: Support Prerequisites
1999351       Troubleshooting Enhanced Azure Monitoring for SAP
2178632       Key Monitoring Metrics for SAP on Microsoft Azure
1409604       Virtualization on Windows: Enhanced Monitoring
2191498       SAP on Linux with Azure: Enhanced Monitoring
2039619       SAP Applications on Microsoft Azure using the Oracle Database: Supported Products and Versions
2233094       DB6: SAP Applications on Azure Using IBM DB2 for Linux, UNIX, and Windows - Additional Information
2243692       Linux on Microsoft Azure (IaaS) VM: SAP license issues
1984787       SUSE LINUX Enterprise Server 12: Installation notes
2002167       Red Hat Enterprise Linux 7.x: Installation and Upgrade
2069760       Oracle Linux 7.x SAP Installation and Upgrade
1597355       Swap-space recommendation for Linux
2171857       Oracle Database 12c - file system support on Linux
1114181       Oracle Database 11g - file system support on Linux

Also read the SCN Wiki that contains all SAP Notes for Linux.
You should have a working knowledge about the Microsoft Azure Architecture and how Microsoft Azure
Virtual Machines are deployed and operated. You can find more information at
https://azure.microsoft.com/documentation/

NOTE
We are not discussing Microsoft Azure Platform as a Service (PaaS) offerings of the Microsoft Azure Platform. This
paper is about running a database management system (DBMS) in Microsoft Azure Virtual Machines (IaaS) just as you
would run the DBMS in your on-premises environment. Database capabilities and functionalities between these two
offers are very different and should not be mixed up with each other. See also:
https://azure.microsoft.com/services/sql-database/

Since we are discussing IaaS, in general the Windows, Linux, and DBMS installation and configuration are
essentially the same as any virtual machine or bare metal machine you would install on-premises. However,
there are some architecture and system management implementation decisions, which are different when
utilizing IaaS. The purpose of this document is to explain the specific architectural and system management
differences that you must be prepared for when using IaaS.
In general, the overall areas of difference that this paper discusses are:
Planning the proper VM/disk layout of SAP systems to ensure you have the proper data file layout and can
achieve enough IOPS for your workload.
Networking considerations when using IaaS.
Specific database features to use in order to optimize the database layout.
Backup and restore considerations in IaaS.
Utilizing different types of images for deployment.
High Availability in Azure IaaS.

Structure of an RDBMS Deployment


In order to follow this chapter, it is necessary to understand what was presented in this chapter of the
Deployment Guide. Knowledge about the different VM-Series and their differences and differences of Azure
Standard and Premium Storage should be understood and known before reading this chapter.
Until March 2015, disks that contain an operating system were limited to 127 GB in size. This limitation
was lifted in March 2015 (for more information, see https://azure.microsoft.com/blog/2015/03/25/azure-vm-
os-drive-limit-octupled/). Since then, disks containing the operating system can have the same size as any
other disk. Nevertheless, we still prefer a structure of deployment where the operating system, DBMS, and
eventual SAP binaries are separate from the database files. Therefore, we expect SAP systems running in
Azure Virtual Machines have the base VM (or disk) installed with the operating system, database
management system executables, and SAP executables. The DBMS data and log files are stored in Azure
Storage (Standard or Premium Storage) in separate disks and attached as logical disks to the original Azure
operating system image VM.
Dependent on leveraging Azure Standard or Premium Storage (for example by using the DS-series or GS-
series VMs) there are other quotas in Azure, which are documented here (Linux) and here (Windows). When
planning your disk layout, you need to find the best balance of the quotas for the following items:
The number of data files.
The number of disks that contain the files.
The IOPS quotas of a single disk.
The data throughput per disk.
The number of additional data disks possible per VM size.
The overall storage throughput a VM can provide.
Azure enforces an IOPS quota per data disk. These quotas are different for disks hosted on Azure Standard
Storage and Premium Storage. I/O latencies are also very different between the two storage types with
Premium Storage delivering factors better I/O latencies. Each of the different VM types has a limited number
of data disks that you are able to attach. Another restriction is that only certain VM types can leverage Azure
Premium Storage. This means the decision for a certain VM type might not only be driven by the CPU and
memory requirements, but also by the IOPS, latency and disk throughput requirements that usually are
scaled with the number of disks or the type of Premium Storage disks. Especially with Premium Storage the
size of a disk also might be dictated by the number of IOPS and throughput that needs to be achieved by
each disk.
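As a back-of-the-envelope illustration of this balancing exercise (the numbers below are placeholders, not real Azure quotas; take the actual per-disk limits from the pages linked above):

```shell
# How many data disks of a given per-disk IOPS quota are needed to reach a
# target IOPS rate, rounded up. 20000 and 5000 are illustrative values only.
target_iops=20000
iops_per_disk=5000
disks=$(( (target_iops + iops_per_disk - 1) / iops_per_disk ))
echo "Need at least $disks data disks"
# prints: Need at least 4 data disks
```

The same sum-of-quotas reasoning applies to throughput: the aggregate of the disks behind a software-RAID LUN has to cover the required rate while staying below the per-VM storage throughput limit.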
The fact that the overall IOPS rate, the number of disks mounted, and the size of the VM are all tied together
might cause an Azure configuration of an SAP system to be different than its on-premises deployment. The
IOPS limits per LUN are usually configurable in on-premises deployments, whereas with Azure Storage those
limits are fixed or, as in Premium Storage, dependent on the disk type. So with on-premises deployments we
see customer configurations of database servers that are using many different volumes for special
executables like SAP and the DBMS or special volumes for temporary databases or table spaces. When such
an on-premises system is moved to Azure, it might waste potential IOPS bandwidth by dedicating disks to
executables or databases that perform few or no IOPS. Therefore, in Azure VMs we
recommend that the DBMS and SAP executables be installed on the OS disk if possible.
The placement of the database files and log files and the type of Azure Storage used, should be defined by
IOPS, latency, and throughput requirements. In order to have enough IOPS for the transaction log, you might
be forced to leverage multiple disks for the transaction log file or use a larger Premium Storage disk. In such
a case one would build a software RAID (for example Windows Storage Pool for Windows or MDADM and
LVM (Logical Volume Manager) for Linux) with the disks, which contain the transaction log.
Windows
Drive D:\ in an Azure VM is a non-persisted drive, which is backed by some local disks on the Azure
compute node. Because it is non-persisted, any changes made to the content on the D:\
drive are lost when the VM is rebooted. By "any changes", we mean saved files, directories created,
applications installed, etc.

Linux
Linux Azure VMs automatically mount a drive at /mnt/resource that is a non-persisted drive backed by
local disks on the Azure compute node. Because it is non-persisted, this means that any changes made to
content in /mnt/resource are lost when the VM is rebooted. By any changes, we mean files saved,
directories created, applications installed, etc.

Dependent on the Azure VM-series, the local disks on the compute node show different performance, which
can be categorized like:
A0-A7: Very limited performance. Not usable for anything beyond the Windows page file
A8-A11: Very good performance characteristics with some ten thousand IOPS and >1GB/sec throughput
D-Series: Very good performance characteristics with some ten thousand IOPS and >1GB/sec throughput
DS-Series: Very good performance characteristics with some ten thousand IOPS and >1GB/sec
throughput
G-Series: Very good performance characteristics with some ten thousand IOPS and >1GB/sec throughput
GS-Series: Very good performance characteristics with some ten thousand IOPS and >1GB/sec
throughput
The statements above apply to the VM types that are certified with SAP. The VM-series with excellent IOPS
and throughput qualify for leverage by some DBMS features, like tempdb or temporary table space.
Caching for VMs and data disks
When we create data disks through the portal or when we mount uploaded disks to VMs, we can choose
whether the I/O traffic between the VM and those disks located in Azure storage are cached. Azure Standard
and Premium Storage use two different technologies for this type of cache. In both cases, the cache itself
would be disk backed on the same drives used by the temporary disk (D:\ on Windows or /mnt/resource on
Linux) of the VM.
For Azure Standard Storage the possible cache types are:
No caching
Read caching
Read and Write caching
In order to get consistent and deterministic performance, you should set the caching on Azure Standard
Storage for all disks containing DBMS-related data files, log files, and table space to 'NONE'. The
caching of the VM can remain with the default.
For Azure Premium Storage the following caching options exist:
No caching
Read caching
Recommendation for Azure Premium Storage is to leverage Read caching for data files of the SAP
database and choose No caching for the disks of log file(s).
Software RAID
As already stated above, you need to balance the number of IOPS needed for the database files across the
number of disks you can configure and the maximum IOPS an Azure VM provides per disk or Premium
Storage disk type. Easiest way to deal with the IOPS load over disks is to build a software RAID over the
different disks. Then place a number of data files of the SAP DBMS on the LUNS carved out of the software
RAID. Dependent on the requirements, you might want to consider the usage of Premium Storage as well,
since two of the three different Premium Storage disk types provide a higher IOPS quota than disks based on
Standard Storage, in addition to the significantly better I/O latency that Azure Premium Storage provides.
Same applies to the transaction log of the different DBMS systems. For many of them, just adding more
transaction log files does not help, since the DBMS writes into only one of the files at a time. If higher IOPS rates are
needed than a single Standard Storage based disk can deliver, you can stripe over multiple Standard Storage
disks or you can use a larger Premium Storage disk type that beyond higher IOPS rates also delivers factors
lower latency for the write I/Os into the transaction log.
Situations experienced in Azure deployments, which would favor using a software RAID are:
Transaction Log/Redo Log require more IOPS than Azure provides for a single disk. As mentioned above
this can be solved by building a LUN over multiple disks using a software RAID.
Uneven I/O workload distribution over the different data files of the SAP database. In such cases one can
experience one data file hitting the quota rather often. Whereas other data files are not even getting close
to the IOPS quota of a single disk. In such cases the easiest solution is to build one LUN over multiple
disks using a software RAID.
You don't know what the exact I/O workload per data file is and only roughly know what the overall IOPS
workload against the DBMS is. Easiest to do is to build one LUN with the help of a software RAID. The sum
of quotas of multiple disks behind this LUN should then fulfill the known IOPS rate.

Windows
We recommend using Windows Storage Spaces if you run on Windows Server 2012 or higher. It is more
efficient than Windows Striping of earlier Windows versions. You might need to create the Windows
Storage Pools and Storage Spaces by PowerShell commands when using Windows Server 2012 as
Operating System. The PowerShell commands can be found here
https://technet.microsoft.com/library/jj851254.aspx

Linux
Only MDADM and LVM (Logical Volume Manager) are supported to build a software RAID on Linux. For
more information, read the following articles:
Configure Software RAID on Linux (for MDADM)
Configure LVM on a Linux VM in Azure
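The articles linked above walk through the full procedure; as a quick orientation, the shape of an MDADM-based RAID-0 setup looks roughly like the following. This is a dry-run sketch that only prints the commands; the device names, array name, file system, and mount point are made up for illustration (verify the real device names with lsblk), and RAID 0 is sufficient here because Azure Storage already keeps three replicas of each disk.

```shell
# Dry-run sketch: print the commands that would build a RAID-0 array over a
# set of attached data disks with mdadm, then create a file system on it.
# All names below are assumptions for illustration only.
DISKS="/dev/sdc /dev/sdd /dev/sde /dev/sdf"
set -- $DISKS
N=$#

echo "mdadm --create /dev/md0 --level=0 --raid-devices=$N $DISKS"
echo "mkfs.xfs /dev/md0"
echo "mkdir -p /data && mount /dev/md0 /data"
```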

Considerations for leveraging VM-series, which are able to work with Azure Premium Storage usually are:
Demands for I/O latencies that are close to what SAN/NAS devices deliver.
Demand for factors better I/O latency than Azure Standard Storage can deliver.
Higher IOPS per VM than what could be achieved with multiple Standard Storage disks against a certain
VM type.
Since the underlying Azure Storage replicates each disk to at least three storage nodes, simple RAID 0
striping can be used. There is no need to implement RAID5 or RAID1.
Microsoft Azure Storage
Microsoft Azure Storage stores the base VM (with OS) and disks or BLOBs to at least three separate storage
nodes. When creating a storage account or managed disk, you can choose among several replication options.
Azure Storage Local Replication (Locally Redundant) provides levels of protection against data loss due to
infrastructure failure that few customers could afford to deploy. There are four different
options, with a fifth being a variation of one of the first three. Looking closer at them, we can distinguish:
Premium Locally Redundant Storage (LRS): Azure Premium Storage delivers high-performance, low-
latency disk support for virtual machines running I/O-intensive workloads. There are three replicas of the
data within the same Azure datacenter of an Azure region. The copies are in different Fault and Upgrade
Domains (for concepts see this chapter in the Planning Guide). In case of a replica of the data going out of
service due to a storage node failure or disk failure, a new replica is generated automatically.
Locally Redundant Storage (LRS): In this case, there are three replicas of the data within the same Azure
datacenter of an Azure region. The copies are in different Fault and Upgrade Domains (for concepts see
this chapter in the Planning Guide). In case of a replica of the data going out of service due to a storage
node failure or disk failure, a new replica is generated automatically.
Geo Redundant Storage (GRS): In this case, there is an asynchronous replication that feeds an additional
three replicas of the data in another Azure Region, which is in most of the cases in the same geographical
region (like North Europe and West Europe). This results in three additional replicas, so that there are six
replicas in sum. A variation of this is an addition where the data in the geo replicated Azure region can be
used for read purposes (Read-Access Geo-Redundant).
Zone Redundant Storage (ZRS): In this case, the three replicas of the data remain in the same Azure
Region. As explained in this chapter of the Planning Guide an Azure region can be a number of
datacenters in close proximity. In the case of LRS the replicas would be distributed over the different
datacenters that make one Azure region.
More information can be found here.
NOTE
For DBMS deployments, the usage of Geo Redundant Storage is not recommended.
Azure Storage Geo-Replication is asynchronous. Replication of individual disks mounted to a single VM are not
synchronized in lock step. Therefore, it is not suitable to replicate DBMS files that are distributed over different disks or
deployed against a software RAID based on multiple disks. DBMS software requires that the persistent disk storage is
precisely synchronized across different LUNs and underlying disks/spindles. DBMS software uses various mechanisms
to sequence IO write activities and a DBMS reports that the disk storage targeted by the replication is corrupted if
these vary even by a few milliseconds. Hence if one really wants a database configuration with a database stretched
across multiple disks geo-replicated, such a replication needs to be performed with database means and functionality.
One should not rely on Azure Storage Geo-Replication to perform this job.
The problem is simplest to explain with an example system. Let's assume you have an SAP system uploaded into
Azure, which has eight disks containing data files of the DBMS plus one disk containing the transaction log file. Each
one of these nine disks has data written to it in a consistent method according to the DBMS, whether the data is
being written to the data or transaction log files.
In order to properly geo-replicate the data and maintain a consistent database image, the content of all nine disks
would have to be geo-replicated in the exact order the I/O operations were executed against the nine different disks.
However, Azure Storage geo-replication does not allow you to declare dependencies between disks. This means Microsoft
Azure Storage geo-replication doesn't know that the contents of these nine different disks are related to
each other and that the data changes are consistent only when replicated in the order the I/O operations happened
across all nine disks.
Besides the high likelihood that the geo-replicated images in this scenario do not provide a consistent database image,
geo-redundant storage also carries a performance penalty that can severely impact performance.
In summary, do not use this type of storage redundancy for DBMS-type workloads.
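The consistency problem above can be illustrated with a small simulation. The sketch below is hypothetical, not an Azure API: it applies one globally ordered write sequence to nine disks, then "geo-replicates" each disk independently up to its own point in time, as asynchronous per-disk replication may do. The replicated image then no longer matches any single point in the global write order.

```python
# Simulate why per-disk asynchronous replication breaks DBMS consistency.
# Nine disks receive writes in one global order (as a DBMS sequences its I/O).

def replicate_per_disk(writes, progress):
    """Replicate each disk independently up to its own point in time.

    writes:   list of (disk_id, payload) in global commit order
    progress: dict disk_id -> how many of that disk's writes were replicated
    """
    replica = {disk: [] for disk in progress}
    seen = {disk: 0 for disk in progress}
    for disk, payload in writes:
        if seen[disk] < progress[disk]:
            replica[disk].append(payload)
            seen[disk] += 1
    return replica

def is_consistent(replica, writes):
    """A replica is consistent only if it equals some prefix of the global order."""
    for cut in range(len(writes) + 1):
        prefix = {}
        for disk, payload in writes[:cut]:
            prefix.setdefault(disk, []).append(payload)
        if all(replica.get(d, []) == prefix.get(d, []) for d in replica):
            return True
    return False

# Disk 8 is the log disk; disks 0-7 hold data files. Writes round-robin.
writes = [(i % 9, f"write-{i}") for i in range(27)]

# Lock-step replication (same point in time on every disk) stays consistent ...
sync = replicate_per_disk(writes, {d: 2 for d in range(9)})
print(is_consistent(sync, writes))   # True

# ... but per-disk progress that drifts apart yields an image that matches
# no point in time of the source database.
drifted = replicate_per_disk(writes, {0: 3, 1: 1, 2: 2, 3: 0, 4: 3, 5: 1,
                                      6: 2, 7: 0, 8: 1})
print(is_consistent(drifted, writes))  # False
```

This is exactly the check a DBMS performs implicitly on recovery: if the log and data disks do not correspond to a common point in time, the database image is reported as corrupt.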

Mapping VHDs into Azure Virtual Machine Service Storage Accounts


This chapter only applies to Azure Storage Accounts. If you plan to use Managed Disks, the limitations
mentioned in this chapter do not apply. For more information about Managed Disks, read chapter Managed
Disks of this guide.
An Azure Storage Account is not only an administrative construct, but also subject to limitations. The
limitations differ depending on whether you use an Azure Standard Storage Account or an Azure Premium
Storage Account. The exact capabilities and limitations are listed here.
For Azure Standard Storage, it is important to note that there is a limit on the IOPS per storage account (see the row
containing Total Request Rate in the article). In addition, there is an initial limit of 100 Storage Accounts per
Azure subscription (as of July 2015). Therefore, it is recommended to balance the IOPS of VMs between multiple
storage accounts when using Azure Standard Storage, while a single VM ideally uses a single storage account
if possible. For DBMS deployments, where each VHD hosted on Azure Standard Storage
could reach its quota limit, you should deploy only 30-40 VHDs per Azure Storage Account that uses Azure
Standard Storage. On the other hand, if you leverage Azure Premium Storage and want to store large
database volumes, you might be fine in terms of IOPS. But an Azure Premium Storage Account is far more
restrictive in data volume than an Azure Standard Storage Account. As a result, you can only deploy a limited
number of VHDs within an Azure Premium Storage Account before hitting the data volume limit. In the end,
think of an Azure Storage Account as a virtual SAN that has limited capabilities in IOPS and/or capacity. As
a result, the task remains, as in on-premises deployments, to define the layout of the VHDs of the different
SAP systems over the different imaginary SAN devices or Azure Storage Accounts.
For Azure Standard Storage, avoid presenting storage from different storage accounts to a
single VM where possible.
When using the DS or GS-series of Azure VMs, it is possible to mount VHDs out of Azure Standard Storage
Accounts and Premium Storage Accounts. Use cases like writing backups into Standard Storage backed VHDs
and having DBMS data and log files on Premium Storage come to mind where such heterogeneous storage
could be leveraged.
Based on customer deployments and testing around 30 to 40 VHDs containing database data files and log
files can be provisioned on a single Azure Standard Storage Account with acceptable performance. As
mentioned earlier, the limitation of an Azure Premium Storage Account is likely to be the data capacity it can
hold and not IOPS.
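The rule of thumb above can be turned into a rough planning calculation. The 30-40 VHDs per Standard Storage Account and the 100-account subscription limit come from the text; treat them as planning assumptions, not hard API limits.

```python
import math

# Rule of thumb from the text: cap a Standard Storage Account at roughly
# 30-40 heavily used VHDs so the account-level IOPS limit is not exhausted.
VHDS_PER_STANDARD_ACCOUNT = 40

def standard_accounts_needed(total_vhds, vhds_per_account=VHDS_PER_STANDARD_ACCOUNT):
    """Minimum number of Standard Storage Accounts for a set of busy VHDs."""
    return math.ceil(total_vhds / vhds_per_account)

# Example: an SAP landscape with 130 database VHDs on Standard Storage
print(standard_accounts_needed(130))  # 4

# Sanity check against the initial 100-accounts-per-subscription limit
assert standard_accounts_needed(130) <= 100
```

The same kind of back-of-the-envelope check works for Premium Storage, except there the binding limit is usually the account's data capacity rather than IOPS.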
As with SAN devices on-premises, sharing requires some monitoring in order to detect potential
bottlenecks on an Azure Storage Account. The Azure Monitoring Extension for SAP and the Azure portal are
tools that can be used to detect busy Azure Storage Accounts that may be delivering suboptimal IO
performance. If this situation is detected, it is recommended to move busy VMs to another Azure Storage
Account. Refer to the Deployment Guide for details on how to activate the SAP host monitoring capabilities.
Another article summarizing best practices around Azure Standard Storage and Azure Standard Storage
Accounts can be found here: https://blogs.msdn.com/b/mast/archive/2014/10/14/configuring-azure-virtual-
machines-for-optimal-storage-performance.aspx
Managed Disks
Managed Disks are a new resource type in Azure Resource Manager that can be used instead of VHDs that
are stored in Azure Storage Accounts. Managed Disks automatically align with the Availability Set of the
virtual machine they are attached to and therefore increase the availability of your virtual machine and the
services that are running on the virtual machine. To learn more, read the overview article.
SAP currently only supports Premium Managed Disks. Read SAP Note 1928533 for more details.
Moving deployed DBMS VMs from Azure Standard Storage to Azure Premium Storage
We encounter quite some scenarios where you as customer want to move a deployed VM from Azure
Standard Storage into Azure Premium Storage. If your disks are stored in Azure Storage Accounts, this is not
possible without physically moving the data. There are several ways to achieve the goal:
You could simply copy all VHDs, the base VHD as well as the data VHDs, into a new Azure Premium Storage
Account. Often you chose the number of VHDs in Azure Standard Storage not because you needed the data
volume, but because you needed that many VHDs for the IOPS. Moving to Azure Premium Storage, you
could get by with far fewer VHDs to achieve the same IOPS throughput. Given
that in Azure Standard Storage you pay for the used data and not the nominal disk size, the
number of VHDs did not really matter in terms of costs. However, with Azure Premium Storage, you
pay for the nominal disk size. Therefore, most customers try to keep the number of Azure
VHDs in Premium Storage at the number needed to achieve the required IOPS throughput. So, most
customers decide against a simple 1:1 copy.
If not yet mounted, you mount a single VHD that can contain a database backup of your SAP database.
After the backup, you unmount all VHDs including the VHD containing the backup and copy the base VHD
and the VHD with the backup into an Azure Premium Storage account. You would then deploy the VM
based on the base VHD and mount the VHD with the backup. Now you create additional empty Premium
Storage Disks for the VM that are used to restore the database into. This assumes that the DBMS allows
you to change paths to the data and log files as part of the restore process.
Another possibility is a variation of the former process, where you just copy the backup VHD into Azure
Premium Storage and attach it against a VM that you newly deployed and installed.
You would choose the fourth possibility when you need to change the number of data files of your
database. In such a case, you would perform an SAP homogeneous system copy using export/import. Put
those export files into a VHD that is copied into an Azure Premium Storage Account, and attach it to a VM
that you use to run the import processes. Customers use this possibility mainly when they want to
decrease the number of data files.
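The billing-model difference behind the first option, Standard bills for used data while Premium bills the provisioned (nominal) disk size, is easy to sketch. The per-GB prices and per-disk IOPS figures below are invented placeholders, not real Azure rates; the point is only the shape of the comparison.

```python
# Illustrative only: prices and per-disk IOPS below are invented placeholders,
# not real Azure rates. The billing-model difference is:
#   Standard -> pay for *used* GB;  Premium -> pay for *provisioned* GB.

def standard_monthly_cost(used_gb, price_per_used_gb=0.05):
    return used_gb * price_per_used_gb

def premium_monthly_cost(provisioned_gb, price_per_provisioned_gb=0.15):
    return provisioned_gb * price_per_provisioned_gb

def disks_needed_for_iops(required_iops, iops_per_disk):
    return -(-required_iops // iops_per_disk)  # ceiling division

# On Standard Storage you needed many VHDs to reach the IOPS target
# (500 IOPS per disk, hypothetical):
standard_disks = disks_needed_for_iops(8000, 500)    # 16 disks
# On Premium Storage, far fewer disks reach the same IOPS
# (5000 IOPS per disk, hypothetical):
premium_disks = disks_needed_for_iops(8000, 5000)    # 2 disks

print(standard_disks, premium_disks)
```

This is why a 1:1 copy of all Standard VHDs into Premium Storage is usually the wrong move: the disk count was sized for Standard-Storage IOPS, and carrying it over just multiplies the provisioned capacity you pay for.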
If you use Managed Disks, you can migrate to Premium Storage by:
1. Deallocate the virtual machine
2. If necessary, resize the virtual machine to a size that supports Premium Storage (for example DS or GS)
3. Change the Managed Disk account type to Premium (SSD)
4. Start your virtual machine
Deployment of VMs for SAP in Azure
Microsoft Azure offers multiple ways to deploy VMs and associated disks. It is important to
understand the differences, since preparation of the VMs might differ depending on the method of deployment.
In general, we look into the scenarios described in the following chapters.
Deploying a VM from the Azure Marketplace
You take a Microsoft or third-party image from the Azure Marketplace to deploy your VM.
After you deploy your VM in Azure, you follow the same guidelines and tools to install the SAP software
inside your VM as you would in an on-premises environment. For installing the SAP software inside the
Azure VM, SAP and Microsoft recommend uploading and storing the SAP installation media in disks, or
creating an Azure VM working as a 'file server' that contains all the necessary SAP installation media.
Deploying a VM with a customer-specific generalized image
Due to specific patch requirements regarding your OS or DBMS version, the provided images in the Azure
Marketplace might not fit your needs. Therefore, you might need to create a VM using your own 'private'
OS/DBMS VM image, which can be deployed several times afterwards. To prepare such a 'private' image for
duplication, the OS must be generalized on the on-premises VM. Refer to the Deployment Guide for details
on how to generalize a VM.
If you have already installed SAP content in your on-premises VM (especially for 2-Tier systems), you can
adapt the SAP system settings after the deployment of the Azure VM through the instance rename procedure
supported by the SAP Software Provisioning Manager (SAP Note 1619720). Otherwise you can install the
SAP software later after the deployment of the Azure VM.
As for the database content used by the SAP application, you can either generate the content freshly with an SAP
installation, import your content into Azure by using a VHD with a DBMS database backup, or
leverage capabilities of the DBMS to back up directly into Microsoft Azure Storage. Alternatively, you could
also prepare VHDs with the DBMS data and log files on-premises and then import those as disks into Azure.
In that case, the transfer of the DBMS data loaded from on-premises to Azure happens over VHD
disks that are prepared on-premises.
Moving a VM from on-premises to Azure with a non-generalized disk
You plan to move a specific SAP system from on-premises to Azure (lift and shift). You can do this by
uploading the disk that contains the OS, the SAP binaries, and eventually the DBMS binaries, plus the disks with
the data and log files of the DBMS, to Azure. In contrast to scenario #2 above, you keep the hostname, SAP
SID, and SAP user accounts in the Azure VM as they were configured in the on-premises environment.
Therefore, generalizing the image is not necessary. This case mostly applies to Cross-Premises scenarios
where part of the SAP landscape runs on-premises and part on Azure.

High Availability and Disaster Recovery with Azure VMs


Azure offers the following High Availability (HA) and Disaster Recovery (DR) functionalities, which apply to
the different components we would use for SAP and DBMS deployments:
VMs deployed on Azure Nodes
The Azure Platform does not offer features such as Live Migration for deployed VMs. This means that if
maintenance is necessary on a server cluster on which a VM is deployed, the VM needs to be stopped and
restarted. Maintenance in Azure is performed using so-called Upgrade Domains within clusters of servers.
Only one Upgrade Domain at a time is maintained. During such a restart, there is an interruption of
service while the VM is shut down, maintenance is performed, and the VM is restarted. Most DBMS vendors,
however, provide High Availability and Disaster Recovery functionality that quickly restarts the DBMS services
on another node if the primary node is unavailable. The Azure Platform offers functionality to distribute VMs,
Storage, and other Azure services across Upgrade Domains to ensure that planned maintenance or
infrastructure failures would only impact a small subset of VMs or services. With careful planning, it is
possible to achieve availability levels comparable to on-premises infrastructures.
Microsoft Azure Availability Sets are a logical grouping of VMs or services that ensures the VMs and other
services are distributed to different Fault and Upgrade Domains within a cluster, such that only
one node would be shut down at any point in time (read this (Linux) or this (Windows) article for more details).
Availability Sets need to be configured deliberately when rolling out VMs, as seen here:

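Conceptually, an Availability Set spreads the VMs round-robin across Fault and Update Domains. The sketch below models that placement (domain counts are illustrative; the actual counts depend on region and configuration) to show why maintenance of one domain leaves an HA partner running.

```python
# Model how an Availability Set spreads VMs over Fault/Update Domains.
# Domain counts are illustrative; real values depend on region/configuration.

def place_vms(vm_names, fault_domains=3, update_domains=5):
    """Round-robin placement, the scheme Availability Sets use conceptually."""
    return {
        vm: {"fault_domain": i % fault_domains, "update_domain": i % update_domains}
        for i, vm in enumerate(vm_names)
    }

placement = place_vms(["db-primary", "db-secondary"])
print(placement)

# The two HA partners never share a domain, so maintenance of one
# Update Domain (or failure of one Fault Domain) leaves a partner running.
assert (placement["db-primary"]["update_domain"]
        != placement["db-secondary"]["update_domain"])
assert (placement["db-primary"]["fault_domain"]
        != placement["db-secondary"]["fault_domain"])
```

This is why both DBMS HA partners must be deployed into the same Availability Set: only then does the platform guarantee they land in different Fault and Update Domains.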
If you want to create highly available configurations of DBMS deployments (independent of the individual
DBMS HA functionality used), you need to:
Add the VMs to the same Azure Virtual Network
(https://azure.microsoft.com/documentation/services/virtual-network/)
The VMs of the HA configuration should also be in the same subnet. Name resolution between the
different subnets is not possible in Cloud-Only deployments, only IP resolution works. Using site-to-site or
ExpressRoute connectivity for Cross-Premises deployments, a network with at least one subnet is already
established. Name resolution is done according to the on-premises AD policies and network
infrastructure.
IP Addresses
It is highly recommended to set up the VMs for HA configurations in a resilient way. Relying on IP addresses
to address the HA partner(s) within the HA configuration is not reliable in Azure unless static IP addresses are
used. There are two "Shutdown" concepts in Azure:
Shut down through the Azure portal or the Azure PowerShell cmdlet Stop-AzureRmVM: In this case, the Virtual
Machine gets shut down and de-allocated. Your Azure account is no longer charged for this VM, so the only
charges incurred are for the storage used. However, if the private IP address of the network interface was
not static, the IP address is released, and it is not guaranteed that the network interface gets the old IP
address assigned again after a restart of the VM. Performing the shutdown through the Azure portal or by
calling Stop-AzureRmVM automatically causes de-allocation. If you do not want to deallocate the machine,
use Stop-AzureRmVM -StayProvisioned.
If you shut down the VM from the OS level, the VM gets shut down but NOT de-allocated. However, in this
case, your Azure account is still charged for the VM, despite the fact that it is shut down. In such a case, the
assignment of the IP address to the stopped VM remains intact. Shutting down the VM from within does not
automatically force de-allocation.
Even for Cross-Premises scenarios, by default a shutdown and de-allocation means de-assignment of the IP
addresses from the VM, even if on-premises policies in DHCP settings are different.
The exception is if one assigns a static IP address to a network interface as described here.
In such a case the IP address remains fixed as long as the network interface is not deleted.

IMPORTANT
In order to keep the whole deployment simple and manageable, the clear recommendation is to set up the VMs
partnering in a DBMS HA or DR configuration within Azure in a way that there is functioning name resolution
between the different VMs involved.
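One practical way to honor this recommendation in application or cluster scripts is to resolve the HA partner's hostname at connection time instead of caching an IP address that may change after a deallocation. A minimal sketch ("localhost" stands in for the partner's hostname):

```python
import socket

def partner_address(hostname):
    """Resolve the HA partner by name on every (re)connect attempt.

    Resolving late means a VM that came back with a new private IP after
    deallocation is still found, as long as name resolution is functioning.
    """
    return socket.gethostbyname(hostname)

# 'localhost' stands in for the HA partner's hostname in this sketch.
ip = partner_address("localhost")
print(ip)
```

The same principle applies to cluster resource definitions: prefer names (or static IP addresses, where names are not an option) over dynamically assigned addresses.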

Deployment of Host Monitoring


For productive usage of SAP Applications in Azure Virtual Machines, SAP requires the ability to get host
monitoring data from the physical hosts running the Azure Virtual Machines. A specific SAP Host Agent patch
level is required that enables this capability in SAPOSCOL and SAP Host Agent. The exact patch level is
documented in SAP Note 1409604.
For the details regarding deployment of components that deliver host data to SAPOSCOL and SAP Host
Agent, and the lifecycle management of those components, refer to the Deployment Guide.

Specifics to Microsoft SQL Server


SQL Server IaaS
With Microsoft Azure, you can easily migrate your existing SQL Server applications built on the Windows
Server platform to Azure Virtual Machines. SQL Server in a Virtual Machine enables you to reduce the total
cost of ownership of deployment, management, and maintenance of enterprise-breadth applications by easily
migrating these applications to Microsoft Azure. With SQL Server in an Azure Virtual Machine, administrators
and developers can still use the same development and administration tools that are available on-premises.

IMPORTANT
We are not discussing Microsoft Azure SQL Database, which is a Platform as a Service offer of the Microsoft Azure
Platform. The discussion in this paper is about running the SQL Server product as it is known for on-premises
deployments in Azure Virtual Machines, leveraging the Infrastructure as a Service capability of Azure. Database
capabilities and functionalities between these two offers are different and should not be mixed up with each other. See
also: https://azure.microsoft.com/services/sql-database/

It is strongly recommended to review this documentation before continuing.


The following sections aggregate and reference parts of the documentation under the link above. Specifics
around SAP are mentioned as well, and some concepts are described in more detail.
However, it is highly recommended to work through the documentation above first before reading the SQL
Server specific documentation.
There is some SQL Server in IaaS specific information you should know before continuing:
Virtual Machine SLA: There is an SLA for Virtual Machines running in Azure, which can be found here:
https://azure.microsoft.com/support/legal/sla/
SQL Version Support: For SAP customers, SQL Server 2008 R2 and higher is supported on Microsoft
Azure Virtual Machines. Earlier editions are not supported. Review this general Support Statement for more
details. Note that SQL Server 2008 is, in general, supported by Microsoft as well. However, due to the
significant functionality for SAP that was introduced with SQL Server 2008 R2, SQL Server 2008 R2 is
the minimum release for SAP. Keep in mind that SQL Server 2012 and 2014 were extended with deeper
integration into the IaaS scenario (like backing up directly against Azure Storage). Therefore, we restrict
this paper to SQL Server 2012 and 2014 with their latest patch levels for Azure.
SQL Feature Support: Most SQL Server features are supported on Microsoft Azure Virtual Machines with
some exceptions. SQL Server Failover Clustering using Shared Disks is not supported. Distributed
technologies like Database Mirroring, AlwaysOn Availability Groups, Replication, Log Shipping and Service
Broker are supported within a single Azure Region. SQL Server AlwaysOn also is supported between
different Azure Regions as documented here:
https://blogs.technet.com/b/dataplatforminsider/archive/2014/06/19/sql-server-alwayson-availability-
groups-supported-between-microsoft-azure-regions.aspx. Review the Support Statement for more details.
An example on how to deploy an AlwaysOn configuration is shown in this article. Also, check out the Best
Practices documented here
SQL Performance: We are confident that Microsoft Azure hosted Virtual Machines perform very well in
comparison to other public cloud virtualization offerings, but individual results may vary. Check out this
article.
Using Images from the Azure Marketplace: The fastest way to deploy a new Microsoft Azure VM is to use
an image from the Azure Marketplace. There are images in the Azure Marketplace that contain SQL
Server. The images where SQL Server is already installed can't be used immediately for SAP NetWeaver
applications, because those images are installed with the default SQL Server collation and not the
collation required by SAP NetWeaver systems. In order to use such images, check the steps documented
in chapter Using a SQL Server image out of the Microsoft Azure Marketplace.
Check out Pricing Details for more information. The SQL Server 2012 Licensing Guide and SQL Server
2014 Licensing Guide are also an important resource.
SQL Server configuration guidelines for SAP-related SQL Server installations in Azure VMs
Recommendations on VM/VHD structure for SAP-related SQL Server deployments
In accordance with the general description, SQL Server executables should be located or installed on the
system drive of the VM's OS disk (drive C:). Typically, most of the SQL Server system databases are not
utilized at a high level by SAP NetWeaver workload. Hence, the system databases of SQL Server (master,
msdb, and model) can remain on the C:\ drive as well. An exception could be tempdb, which in the case of
some SAP ERP and all BW workloads might require either a higher data volume or a higher I/O operations
volume that can't fit into the original VM. For such systems, the following steps should be performed:
Move the primary tempdb data file(s) to the same logical drive as the primary data file(s) of the SAP
database.
Add any additional tempdb data files to each of the other logical drives containing a data file of the SAP
user database.
Add the tempdb log file to the logical drive that contains the user database's log file.
Exclusively for VM types that use local SSDs on the compute node, tempdb data and log files might be
placed on the D:\ drive. Nevertheless, it might be recommended to use multiple tempdb data files. Be
aware that D:\ drive volumes differ based on the VM type.
These configurations enable tempdb to consume more space than the system drive is able to provide. In
order to determine the proper tempdb size, you can check the tempdb sizes on existing systems running
on-premises. In addition, such a configuration enables IOPS numbers against tempdb that cannot be
provided by the system drive. Again, systems that are running on-premises can be used to monitor the I/O
workload against tempdb, so that you can derive the IOPS numbers you expect to see on your tempdb.
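The placement steps above can be expressed as a small layout generator. The drive letters and paths are hypothetical; the function just mirrors the rules: one tempdb data file per logical drive that carries an SAP data file, and the tempdb log next to the SAP database log.

```python
# Sketch of the tempdb placement rules above. Drive letters are hypothetical.

def tempdb_layout(sap_data_drives, sap_log_drive):
    """One tempdb data file per drive holding SAP data files;
    tempdb log on the drive holding the SAP database log."""
    return {
        "tempdb_data_files": [f"{d}:\\tempdb\\tempdb{i}.mdf"
                              for i, d in enumerate(sap_data_drives, start=1)],
        "tempdb_log_file": f"{sap_log_drive}:\\tempdb\\templog.ldf",
    }

layout = tempdb_layout(sap_data_drives=["F", "G", "H"], sap_log_drive="L")
print(layout["tempdb_data_files"])
print(layout["tempdb_log_file"])
```

Spreading the tempdb data files this way means tempdb I/O rides on the same IOPS budget as the SAP data files, rather than competing with the OS on the system drive.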
A VM configuration, which runs SQL Server with an SAP database and where tempdb data and tempdb
logfile are placed on the D:\ drive would look like:

Be aware that the D:\ drive has a different size depending on the VM type. Depending on the size requirement
of tempdb, you might be forced to pair tempdb data and log files with the SAP database data and log files in
cases where the D:\ drive is too small.
Formatting the disks
For SQL Server, the NTFS block size for disks containing SQL Server data and log files should be 64K. There is
no need to format the D:\ drive; this drive comes pre-formatted.
In order to make sure that the restore or creation of databases does not initialize the data files by zeroing the
content of the files, make sure that the user context the SQL Server service is running in has a
certain permission. Usually, users in the Windows Administrator group have these permissions. If the SQL
Server service runs in the user context of a non-Windows Administrator user, you need to assign that user the
user right Perform volume maintenance tasks. See the details in this Microsoft Knowledge Base article:
https://support.microsoft.com/kb/2574695
Impact of database compression
In configurations where I/O bandwidth can become a limiting factor, every measure that reduces IOPS
might help to stretch the workload one can run in an IaaS scenario like Azure. Therefore, if not yet done,
applying SQL Server PAGE compression is strongly recommended by both SAP and Microsoft before
uploading an existing SAP database to Azure.
The recommendation to perform database compression before uploading to Azure is given for the following
reasons:
The amount of data to be uploaded is lower.
The duration of the compression execution is shorter, assuming that one can use stronger hardware with
more CPUs, higher I/O bandwidth, or less I/O latency on-premises.
Smaller database sizes might lead to lower costs for disk allocation.
Database compression works as well in an Azure Virtual Machines as it does on-premises. For more details
on how to compress an existing SAP SQL Server database, check here:
https://blogs.msdn.com/b/saponsqlserver/archive/2010/10/08/compressing-an-sap-database-using-report-
msscompress.aspx
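The upload-time argument is easy to quantify. The database size, compression ratio, and bandwidth below are hypothetical numbers chosen purely to illustrate the calculation:

```python
# Quantify the first reason above: less data to upload after PAGE compression.
# Database size, compression ratio, and bandwidth are hypothetical.

def upload_hours(db_size_gb, mbit_per_s):
    bits = db_size_gb * 8 * 1024 ** 3          # GB -> bits
    return bits / (mbit_per_s * 10 ** 6) / 3600

uncompressed = upload_hours(2048, mbit_per_s=200)       # 2 TB database
compressed = upload_hours(2048 * 0.4, mbit_per_s=200)   # assume ~60% saved by PAGE compression

print(f"{uncompressed:.1f} h vs {compressed:.1f} h")
```

Actual compression ratios vary per database, so measure on the on-premises system before relying on such an estimate.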
SQL Server 2014 Storing Database Files directly on Azure Blob Storage
SQL Server 2014 opens the possibility to store database files directly on the Azure Blob Store without the
wrapper of a VHD around them. Especially when using Standard Azure Storage or smaller VM types, this
enables scenarios where you can overcome the limits on IOPS that would be enforced by the limited number of
disks that can be mounted to some smaller VM types. This works for user databases, however, not for the
system databases of SQL Server. It also works for both data and log files of SQL Server. If you'd like to deploy an SAP SQL
Server database this way instead of wrapping it into VHDs, keep the following in mind:
The Storage Account used needs to be in the same Azure Region as the one used to deploy the VM
SQL Server is running in.
Considerations listed earlier regarding the distribution of VHDs over different Azure Storage Accounts
apply to this method of deployment as well. This means the I/O operations count against the limits of the
Azure Storage Account.
Details about this type of deployment are listed here: https://docs.microsoft.com/sql/relational-
databases/databases/sql-server-data-files-in-microsoft-azure
In order to store SQL Server data files directly on Azure Premium Storage, you need to have a minimum SQL
Server 2014 patch release, which is documented here: https://support.microsoft.com/kb/3063054. Storing
SQL Server data files on Azure Standard Storage does work with the released version of SQL Server 2014.
However, the very same patches contain another series of fixes that make the direct use of Azure Blob
Storage for SQL Server data files and backups more reliable. Therefore, we recommend using these patches in
general.
SQL Server 2014 Buffer Pool Extension
SQL Server 2014 introduced a new feature called Buffer Pool Extension. This functionality extends
the buffer pool of SQL Server, which is kept in memory, with a second-level cache backed by the local SSDs
of a server or VM. This makes it possible to keep a larger working set of data in memory. Compared to accessing
Azure Standard Storage, access into the extension of the buffer pool, which is stored on the local SSDs of an
Azure VM, is many times faster. Therefore, leveraging the local D:\ drive of the VM types that have excellent
IOPS and throughput could be a very reasonable way to reduce the IOPS load against Azure Storage and
improve response times of queries dramatically. This applies especially when not using Premium Storage. In
the case of Premium Storage and the usage of the Premium Azure Read Cache on the compute node, as
recommended for data files, no significant differences are expected. The reason is that both caches (SQL Server
Buffer Pool Extension and Premium Storage Read Cache) use the local disks of the compute nodes. For
more details about this functionality, check this documentation: https://docs.microsoft.com/sql/database-
engine/configure-windows/buffer-pool-extension
Backup/Recovery considerations for SQL Server
When deploying SQL Server into Azure, your backup methodology must be reviewed. Even if the system is
not a productive system, the SAP database hosted by SQL Server must be backed up periodically. Since Azure
Storage keeps three images, a backup is now less important with respect to compensating for a storage crash. The
primary reason for maintaining a proper backup and recovery plan is that you can compensate for
logical/manual errors by providing point-in-time recovery capabilities. So the goal is to either use backups to
restore the database back to a certain point in time, or to use the backups in Azure to seed another system by
copying the existing database. For example, you could transfer from a 2-Tier SAP configuration to a 3-Tier
system setup of the same system by restoring a backup.
There are three different ways to backup SQL Server to Azure Storage:
1. SQL Server 2012 CU4 and higher can natively back up databases to a URL. This is detailed in the blog New
functionality in SQL Server 2014 Part 5 Backup/Restore Enhancements. See chapter SQL Server 2012
SP1 CU4 and later.
2. SQL Server releases prior to SQL 2012 CU4 can use a redirection functionality to back up to a VHD and
basically move the write stream towards an Azure Storage location that has been configured. See chapter
SQL Server 2012 SP1 CU3 and earlier releases.
3. The final method is to perform a conventional SQL Server backup-to-disk command onto a disk device.
This is identical to the on-premises deployment pattern and is not discussed in detail in this document.
SQL Server 2012 SP1 CU4 and later
This functionality allows you to back up directly to Azure Blob storage. Without this method, you must
back up to other disks, which would consume disk and IOPS capacity. The idea is basically this:

The advantage in this case is that one doesn't need to dedicate disks to storing SQL Server backups. So you
have fewer disks allocated, and the whole bandwidth of disk IOPS can be used for data and log files. Note that
the maximum size of a backup is limited to 1 TB, as documented in the section Limitations in
this article: https://docs.microsoft.com/sql/relational-databases/backup-restore/sql-server-backup-to-
url#limitations. If the backup size, despite using SQL Server backup compression, would exceed 1 TB, then
the functionality described in chapter SQL Server 2012 SP1 CU3 and earlier releases in this document needs