Practices Guide
Contents
• Introduction
• Restoring HANA via Commvault® GUI
• HANA Database Copy
  - Commvault GUI
• Backup
• Restore
• HANA Snap Clone
• Commands for starting and stopping HANA replication
INTRODUCTION
SAP HANA® is an innovative in-memory database platform and SAP’s answer to the question of how to analyse Big Data in near real time. HANA takes full advantage of new hardware technologies by combining columnar data storage, massively parallel processing on modern CPUs and in-memory computing. This puts HANA in a position to support numerous use cases in the real-time analytics space. Beyond that, SAP HANA can also be deployed as the underlying database of SAP Business Suite® applications.
A HANA database is identified by a unique SID. Up to four so-called servers (depending on HANA
SPS level) form a HANA database. Each of these servers has its own data and log volume. In case
of increasing workloads, a HANA database can be scaled up by adding more memory and CPUs to
the local node it’s running on.
[Diagram: scale-up HANA database (one SID) with name server, stats server, XS engine and index server sharing the memory of a single node.]
There is also a scale-out option, which works by adding additional nodes so that a single database is spread across multiple nodes.
[Diagram: scale-out HANA database (one SID) spread across four nodes, each running its own index server, stats server, XS engine and name server with local config files.]
In order to protect against power failures and system crashes, HANA has the concept of persisting
all changed data of the four HANA servers along with their logfiles to disk as so-called savepoints.
Those savepoints are scheduled to happen periodically (typically every 5 minutes). Savepoints are
consistent images of the database. After a crash the in-memory database gets loaded from the
last savepoint and is rolled forward to the latest transaction using the externalized logfiles.
SAP HANA supports three approaches for backing up the database:
• File-level backup, where the data gets written to a local storage location
• Snapshots using storage-level snapshot technologies
• Backint for HANA, an SAP-defined streaming backup interface for connecting centralized third-party backup solutions to back up HANA
As of now, Backint for HANA is the backup approach preferred by many HANA users, as it is the only SAP-certified way of connecting a SAP HANA landscape to an existing centralized backup solution like Commvault® software.
Commvault, as a market-leading solution for enterprise backup, not only implements Backint for HANA but also supports the SAP HANA Snapshot interface as part of the Commvault iDataAgent for SAP HANA. Commvault software protects scale-up and scale-out configurations for standalone SAP HANA deployments along with SAP Business Suite Powered by HANA and SAP S/4HANA environments.
Data transfer via Backint for HANA works through named pipes. Communication between HANA and the backup tool is facilitated by so-called input and output files. For instance, for a backint backup call, HANA allocates named pipes and passes their full paths via the input file. Once the backup has completed, Commvault software returns backup status information (like the backup IDs) and return codes via the output file back to HANA. According to the standard, the Commvault SAP HANA agent executable is named “backint” and is accessed via a logical link in a defined directory.
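The named-pipe data transfer described above can be illustrated with a minimal, self-contained sketch. The pipe path and payload here are purely illustrative; in a real backup the pipes are allocated by HANA and their paths are listed in the backint input file.

```shell
PIPE=/tmp/backint_demo_pipe
rm -f "$PIPE"
mkfifo "$PIPE"
# writer: stands in for HANA streaming data into the pipe
( echo "simulated HANA data stream" > "$PIPE" ) &
# reader: stands in for the backup tool consuming the pipe
OUT=$(cat "$PIPE")
wait
rm -f "$PIPE"
echo "$OUT"
```

Because a named pipe blocks until both ends are connected, data flows directly from producer to consumer without an intermediate file on disk.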
PREPARATION
Before the installation of the Commvault agent for SAP HANA can start, the SID of the SAP HANA database must be determined. In addition, two directories must exist, which are required during and after installation:
• SAPHANAEXE directory, which will later hold the symbolic link to the Commvault SAP HANA agent
- /usr/sap/<SID>/SYS/global/hdb/opt
• Backint for HANA agent configuration file directory
- /usr/sap/<SID>/SYS/global/hdb/opt/hdbconfig
If either one of these directories is missing, it needs to be created manually as user <SID>adm.
This activity needs to be repeated on all SAP HANA nodes in a multi-node environment.
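The directory preparation can be sketched as follows. In production this is run as user <SID>adm with the base path /usr/sap/<SID>/SYS/global/hdb/opt; the /tmp prefix and the hypothetical SID HSB only make the sketch runnable outside a HANA host.

```shell
SID=HSB                                    # hypothetical SID
BASE="/tmp/usr/sap/$SID/SYS/global/hdb/opt"
# creates the SAPHANAEXE directory and the parameter file directory in one go
mkdir -p "$BASE/hdbconfig"
ls -ld "$BASE" "$BASE/hdbconfig"
```

Since mkdir -p creates the whole chain, a single command covers both required directories; remember to repeat it on every node of a multi-node system.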
INSTALLATION
The Commvault SAP HANA agent can be installed from the command line or using push install. Remember, this needs to be done on all SAP HANA nodes.
After installation, the link to the Commvault backint executable should be available; if not, create it manually as user <SID>adm (see also the next section).
PUSH INSTALL
After selecting the SAP HANA agent, make sure you enter the correct OS group (see above) before
pushing out the agent.
For V11 up to SP3, the path for SAPHANAEXE is not requested by the installer. Therefore, the
symbolic link to Commvault backint must be created manually after completion of the push install.
Starting with SP4, SAPHANAEXE path will be requested during push install.
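Where the link must be created manually, the commands look roughly as follows. The SID (HSB), the /tmp sandbox and the stub file are assumptions so the sketch runs anywhere; on a real host the link would live in the SAPHANAEXE directory and point at Commvault’s installed backint binary, and the command would be run as <SID>adm.

```shell
OPT=/tmp/usr/sap/HSB/SYS/global/hdb/opt    # stands in for the SAPHANAEXE directory
mkdir -p "$OPT"
touch /tmp/backint_stub                    # stands in for the Commvault backint binary
ln -sf /tmp/backint_stub "$OPT/backint"
readlink "$OPT/backint"                    # verify where the link points
```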
CONFIGURATION
Before running backups and restores, the configuration must be completed by creating and linking a parameter file, creating a pseudo-client in the Commvault GUI and switching on backups via Backint in SAP HANA Studio.
With Commvault V11 it is important to name the parameter file “param” and place it under /opt/commvault/iDataAgent. It should be given read-write permissions (664), assigned to group sapsys and owned by user <SID>adm.
The configuration procedure must be repeated on EACH HANA node in a multi-node environment.
A shared parameter file for HANA multi-node configurations can be used.
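The file setup described above can be sketched like this. The real path is /opt/commvault/iDataAgent/param, owned by <SID>adm and group sapsys; the /tmp path and the commented-out chown keep the sketch runnable without a HANA host or the sapsys group.

```shell
PARAM=/tmp/param
touch "$PARAM"
chmod 664 "$PARAM"                         # rw-rw-r--, as required
# chown hsbadm:sapsys "$PARAM"             # on a real host (hsbadm = <SID>adm for SID HSB)
stat -c '%a' "$PARAM"                      # prints the octal mode for verification
```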
COMMVAULT® GUI CONFIGURATION
In Commvault GUI, a pseudo client for the SAP HANA instance must be created.
The client name is arbitrary and can be whatever is preferred by the user. In addition, the following
entries are required:
• In the “Details” tab, the client name for all participating HANA nodes must be added. In this
example a single node environment is configured and therefore only one hostname is entered.
Please note, for HA or HANA system replication setups, the Primary Name Server must be the
first client/node selected, followed by the secondary clients/nodes.
Under the “Storage Device” tab, enter Storage Policies, Dedupe and Compression settings as needed. When setting up a suitable Storage Policy, it is recommended to configure the primary copy to go to a disk library. This is because HANA environments can easily generate thousands of logfile backup jobs per day; backing up every single logfile to a tape library or VTL would create massive overhead and also wear out the hardware quickly. If a tape copy is desired, it is recommended to configure it as a secondary copy and run periodic auxcopy jobs every couple of hours to copy all new logfiles to tape or VTL.
Please note that the HANA GUI instance is simpler in Version 10. This product version also does not support subclients or Intellisnap for HANA. With Version 10, HANA backups can be scheduled using a shell script.
After clicking on “Configuration” and “Backint Settings”, one will notice that HANA has already detected the Commvault SAP HANA agent, as it has found the symbolic link in the expected place. The previously created parameter file should have been detected as well (if not, just enter the HANA-side path to the symbolic link). Tick the box that instructs HANA to use the parameter file for both HANA data and log backups.
Now all data backups will go to Commvault, but logfile backups will still go to file (which is the default). To change the log backup destination, go to “Log Backup Settings”, select “Backint” as Destination type, then enable automatic log backups and specify an appropriate backup interval. For testing, a 5-minute interval should be fine in order to see quick results.
Finally, don’t forget to save the settings in SAP HANA Studio. After a while, one will notice logfile
backups coming into the Job Controller view.
ENHANCED MULTI-STREAMING
For newer versions of SAP HANA SPS11 and above, and as of Commvault V11 SP4, enhanced
multistreaming is available. Enhanced multi-streaming works through the new global.ini parameter
parallel_data_backup_backint_channels on the HANA side. For more information please follow the
link in the online documentation for Best Practices for the configuration in Commvault and look it up in
SAP online help for HANA-side configuration. Note that adding streams will require additional memory
and may result in a backup error with an “Out Of Memory” error. If this occurs, “data_backup_buffer_
size” parameter in the “global.ini->backups” HANA Studio configuration section may need to be reduced
to allow for more streams. The default parameter value is 512MB, but reducing to 256MB may allow for
the backups to run with additional streams (if more memory cannot be added to the environment).
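As a sketch, the HANA-side settings discussed above might look as follows in global.ini. The channel count and buffer size are illustrative values for this example, not recommendations:

```ini
[backup]
# number of parallel backint streams per data backup (SPS11 and above)
parallel_data_backup_backint_channels = 4
# per-stream buffer in MB; lowering from the 512 default can avoid
# Out Of Memory errors when additional streams are configured
data_backup_buffer_size = 256
```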
BACKUPS
To run a full backup from SAP HANA Studio, right-click the SID and select “Back Up”. Select “Backint” as Destination Type and decide on the backup type (full, differential or incremental). One can also decide on a specific backup prefix, then hit continue:
On the next screen, hit Finish. Note that the backup is starting. One can watch all four HANA
servers being backed up in parallel.
In the Commvault GUI a new job is displayed of type “Application Command Line Backup”, as this was started from the HANA machine. By checking Commvault’s backup history, one can see the successful differential backup and a number of logfile backups which happened in the meantime. Please note that HANA Backup Catalog backups also show up as logfile backups (marked as “log_backup_0_0_0_0” in the HANA backup catalog details).
When opening HANA’s backup catalog, one can easily correlate the listed jobs with the Commvault job
history by clicking on a single job and looking at the backup identifier (EBID). The last number in the
identifiers is the Commvault job ID (in this example job 12699 and EBID 62852948_12699, see below).
One can also see that the highlighted job is a differential backup consisting of multiple backup pieces,
representing the different HANA servers. By right-clicking on a specific job it can be deleted from
SAP HANA history and optionally from Commvault backup history at the same time.
SCHEDULING OF BACKUPS
After saving the new HANA instance in the Commvault GUI, a “default” subclient will be created automatically. One can use this subclient or create additional subclients for scheduling full, differential and incremental HANA backups, or for using Intellisnap. Please note that SAP HANA incremental and differential backups require SAP HANA SPS10 or higher. Scheduling can be configured at the subclient level or via schedule policies. HANA logfile backups are always initiated on the HANA side and cannot be scheduled using Commvault.
For scheduling with Commvault Version 10, one must create a small shell script which invokes the HANA hdbsql tool. The first step is to create a backup user in SAP HANA Studio (having all backup and restore privileges). To avoid specifying a password in clear text in the script, a special key (BACKUPOPERATOR in our example below) needs to be created in HANA hdbuserstore (with a password) and associated with the pre-created database backup user. This allows the use of the “-U” option of hdbsql in the script, which doesn’t ask for a password. The script should also create and pass a unique identifier which will be shown in the HANA Backup Catalog and allows one to understand which backup was run by whom and when. Here is a complete example script for a HANA instance HSB:
#!/bin/bash
TIMESTAMP="$(date +%F_%k%M)"
BACKUP_PREFIX="SCHEDULED"
BACKUP_PREFIX="${BACKUP_PREFIX}_${TIMESTAMP}"
# The extracted original ends here; this closing hdbsql call is reconstructed
# from the steps described above (hdbuserstore key, prefix passed to backint)
hdbsql -U BACKUPOPERATOR "BACKUP DATA USING BACKINT ('$BACKUP_PREFIX')"
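The BACKUPOPERATOR key described above would have been created once as user <SID>adm, along these lines. Hostname, SQL port and database user are placeholders for this sketch, which assumes the standard hdbuserstore tool and therefore only runs on a HANA host:

```shell
# associate the key with a pre-created backup user (placeholders throughout)
hdbuserstore SET BACKUPOPERATOR hana1:30015 BACKUP_USER <password>
# verify the stored entry (the password itself is never displayed)
hdbuserstore LIST BACKUPOPERATOR
```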
In the Commvault GUI, one can then create a subclient under the File System agent on the HANA node (or control node in a scale-out setup). In the content tab, enter all HANA config file paths (we need to back up these files separately as they are not covered by the HANA DB backup).
In a final step, one must integrate the previously created script into the subclient. It is recommended to add it as a PreScan script, as this will ensure that the script is executed even if the file system backup fails.
Once a backup of this subclient is initiated, one will get a file system backup job along with a HANA
full backup.
The name tag that was assembled by the script becomes the prefix of the backup name in HANA
Backup Catalog, which makes it easy to differentiate between manual and scheduled backups.
To schedule HANA backups using transaction DB13/DBACOCKPIT, for instance in a “SAP Business Suite powered by SAP HANA” setup, one must create a new action, select the desired action type (e.g. Complete Data Backup), choose “Backint” as “Destination Type” and make sure that “Backup Destination” and “UTL File” have the same values as in HANA Studio. Please note that the UTL File is called Parameter File in HANA Studio.
HANA-SIDE TROUBLESHOOTING AND MONITORING
In case of issues, it’s always good to understand what’s going on and what has happened. On the SAP HANA side it’s useful to examine backint.log and backup.log.
While backup.log is more of a summary of the backup activity, backint.log shows the actual backint
(i.e. Commvault SAP HANA Agent) invocation and the CLI output of the agent. This log for instance
allows you to see if your parameter file was found and all parameters were correctly passed over
to Commvault.
Those logs can be found in SAP HANA Studio and also under
/usr/sap/<SID>/HDB<Inst No>/<hostname>/trace/backup.log
/usr/sap/<SID>/HDB<Inst No>/<hostname>/trace/backint.log
RESTORES
As HANA needs to be offline for the restore, one will be asked to agree to shut it down. In the
following screen, one can also select between full and point-in-time recovery. One can also select
to restore a backup taken from a different HANA system.
After selecting the recovery option, pick the desired full backup from a list of backups which is
taken from HANA backup catalog.
In the next screen, please make sure you instruct HANA to use backint for searching for required
log files. HANA will conduct this search before the restore is initiated.
After the restore is finished, HANA automatically requests all logs which are required to recover
the database.
After the database recovery (according to initial settings) has finished, all HANA services are
automatically started and the database is made available.
RESTORING HANA VIA COMMVAULT® GUI
As with other Commvault agents, HANA restores can be initiated by selecting “Browse and Restore” when right-clicking on the HANA instance.
In terms of restore granularity, it is currently only possible to restore the entire database. In the final screen one can decide between full and PIT recovery, enable a data-only restore or optionally remove all existing logs from the log area. Once the OK button is clicked, the restore job will automatically shut down the database, transfer the data and start up HANA again after the recovery phase.
HANA DATABASE COPY
A database copy is an out-of-place restore in conjunction with a change of the SID. One can create the database copy on the same host or on a different host, for instance to create or refresh a HANA test or development system. Please note that any SAP-side restrictions affecting database copies using a third-party backup tool based on the backint interface apply. Please check the SAP HANA Administration Guide and SAP Notes for more details.
COMMVAULT GUI
A HANA database copy is a cross-system restore from Commvault’s perspective. For this, one must start the restore dialog in the GUI from the source HANA instance. As an example, consider restoring database TR2 running on host hana3 to TD2 running on host hana4: select “Browse and Restore” on instance TR2. In the final screen we need to select the correct destination client (HANA_TD2) from the drop-down. This will also populate the “Destination Instance” field. If more than one HANA instance runs on the destination, make sure to select the correct one. Furthermore, please make sure to select the “Initialize Logs” option, which will clean up the log destination so that no remaining logs from the old target instance can prevent proper recovery after the database copy.
Once the OK button is clicked, the database copy will run fully automated. There is no need to apply changes to the HANA parameter file; everything is handled automatically in the background.
SAP HANA STUDIO
The next example goes the other way around, restoring database TD2 to TR2 using SAP HANA Studio. The major differences between creating a database copy via the Commvault GUI and via HANA Studio are:
• An SAP-defined, special naming convention for the HANA parameter file on the target machine needs to be followed (see SAP HANA Administration Guide, chapter 7 for more details). The SID of the source database must be part of the parameter file name on the target database instance
• The parameter SrcCrossClient needs to be present and point to the source database host.
Please note that this parameter is set automatically if one restores via Commvault GUI
The database copy can now be initiated on the target system by selecting “Backup and Recovery”
-> “Recover System”. After deciding between full and point-in-time recovery, one must enable the
“Backint System Copy” option on the following screen and enter the SID of the source system.
Now HANA requests (from Commvault) the backup catalog of the source database which
eventually gets displayed in the following screen, where one must select the desired full backup to
be restored.
Make sure that the correct source system gets displayed. In the next screen it is important to
enable the initialize log area option again.
PREPARATION
Using Intellisnap for HANA requires some preparation. First of all, one must install a Commvault
Media Agent instance on the HANA client machine (in addition to the HANA agent) and apply SP3
on all Commvault components. Secondly, it is important to understand that Commvault snaps the
HANA persistent data storage area only. On the HANA side, one must find out where the persistent
data storage area is located. This information can be found in the persistence section of global.ini
file. Look for parameter basepath_datavolumes. Making sure that the persistent data storage
area sits on a storage LUN which is supported by Commvault Intellisnap. Please note that HANA
log backups will always run in the streaming fashion.
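To locate the persistent data area from the OS, one can read basepath_datavolumes out of global.ini. A sample file is generated below so the sketch is runnable anywhere; on a real host the customized file usually lives under /usr/sap/<SID>/SYS/global/hdb/custom/config/, and the paths shown are illustrative:

```shell
# generate a sample persistence section (illustrative values)
INI=/tmp/global.ini
cat > "$INI" <<'EOF'
[persistence]
basepath_datavolumes = /hana/data/HSB
basepath_logvolumes = /hana/log/HSB
EOF
# extract the data volume base path
awk -F' = ' '/^basepath_datavolumes/ {print $2}' "$INI"
```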
If SAP HANA is running in a virtual machine and Intellisnap technology is to be used, one should be aware that the HANA persistent data area cannot reside on a VMDK disk. Intellisnap only supports storage mapped into VMs via:
• NFS
• iSCSI
It is sufficient to place only the HANA persistent data area on such a LUN or NFS share.
Once the storage-level configuration on the HANA side is done, some configuration work needs to be done in the Commvault GUI. First, enable Intellisnap at the pseudo-client level in the advanced client properties area. Then configure the storage array hosting the HANA snapshot LUNs; click on “Manage Array” for this configuration. It can be sufficient to enter a user and a password here, but depending on the storage product in use it might be required to enter additional information. See the Commvault Online Documentation for more details. This configuration is pretty much the same for all Commvault agents supporting Intellisnap.
In a next step, create a new subclient under the HANA instance. Make sure to associate this subclient with a storage policy that has a primary snap copy configured. In the “Intellisnap Operations” tab, enable the Intellisnap option at the subclient level and also select the correct snap engine for the storage product in use. Finally, select a so-called proxy which will create snapshot copies to disk or tape by mounting the snapshots. Any media agent running the same OS as the HANA clients can be leveraged for this.
BACKUP
When manually initiating a new HANA snapshot, one will notice that only full backups are available
(and supported). By checking the job monitor, notice that the job is marked as a snapshot backup:
In HANA Studio, one can also follow the snapshot creation while the job is running:
Once the job has finished, it is correctly displayed as a snapshot in HANA backup catalog.
Using “List Snaps” (on HANA instance level), one can monitor snapshot operations conducted by
Commvault.
HANA Snapshots can be selectively copied to disk/tape using the Backup Copy function of the
storage policy.
RESTORE
Restoring from HANA snapshots follows the procedure described for the Commvault GUI. All one needs to do is select a snapshot backup job, go through the dialog and start the restore. In this case, the relevant snapshot will be mounted on the target system and all files will be copied from there. The performance of this restore variant largely depends on the configuration and throughput of the entire data path (storage, SAN, LAN, etc.). Many storage vendors also support a so-called revert restore, where no data is transferred; instead, the snapshot is transformed into the primary data volume. This restore option is usually much faster, as it typically takes only minutes. It is recommended to take a new snapshot backup immediately after the revert restore has finished, because the revert operation usually invalidates all newer snapshots taken after the snapshot which was used for the restore. Revert restore can be enabled under advanced restore options (“Use hardware revert capability if available”).
Please note that restoring from Commvault snapshots is currently not supported via HANA Studio. To restore from a snapshot copy (residing on tape or disk) via HANA Studio or the command line, include the parameter CV_restCopyPrec along with the number of the storage policy copy in the parameter file. When restoring from a snapshot copy using the Commvault GUI, configure the “Copy Precedence” tab under advanced restore options; in this case, the parameter file can remain unchanged.
HANA SNAP CLONE
HANA database cloning leverages the Intellisnap feature of Commvault. The HANA database clone is established by cloning an existing Intellisnap snapshot backup. This can be useful in a number of scenarios.
PREREQUISITES
The following prerequisites must be met prior to performing a HANA database clone operation. A supported storage array is required:
• NetApp
• Pure Storage
• EMC TimeFinder Clone
The source database must have the array mounted off the root partition on the server. A typical file system layout for SAP HANA consists of 3 directories beneath a root “/hana” directory, for example:
• /hana/data
• /hana/log
• /hana/shared
In the above example the data volumes are stored under /hana/data/<SID> where <SID> equals
the instance name such as DB1 or TR2. Consequently on a typical HANA installation each of the 3
directories listed above would have a <SID> directory for each database installed and configured
on the server.
When using Intellisnap, the database needs to be stored in an array, so an administrator might be tempted to mount the array in the <SID> subdirectory under /hana/data. However, there is currently a limitation where the clone operation expects to see the data volume mounted directly under the root directory. So for this example, the data volume is mounted under /hsnap while the log and binaries are left under /hana/log and /hana/shared respectively.
The structure for a database called TD3 would then appear as follows:
• /hsnap/data/TD3
• /hana/log/TD3
• /hana/shared/TD3
In this example, the source database is using a NetApp filer for the array, and the data volume is
attached as an NFS mount:
CLONE DATABASE PREPARATION
The following additional steps are required on the destination in order for the Clone process to
succeed.
• Create the clone instance/database on the destination. There must be a SAP HANA database
existing on the destination in order to provide a target in the GUI for the clone. In this example a
database called CL1 was created. The data paths for the data, log, and install volumes can share
the same “/hana” parent directory.
• Register the clone database in a SAP HANA pseudo-client in the Commvault GUI. It can be in
its own pseudo-client or may share the pseudo-client of the source DB to be cloned. Please
refer to the previous sections and to the online documentation for creating a SAP HANA
Pseudo-Client and creating a New Instance for a SAP HANA Pseudo-Client. Without the Clone
database registered in the Commvault GUI it will not be shown in the drop-down list for the clone
operation. For this example the CL1 instance was added to the existing pseudo-client for the
source database TD3.
• The Clone database should also be configured with the symbolic links for hdbbackint as well as the param file prior to the clone operation. This is also detailed in previous sections and in the online documentation; see Configuring Multiple SAP HANA Instances and Creating the SAP HANA Parameter File.
• It is not necessary for the clone DB to be running when the clone operation is started; even if it is, the agent will properly shut it down before beginning the cloning process. In the example shown here, the clone database CL1 was left running prior to beginning the operation.
CLONING PROCEDURE
Once the prerequisites have been met, the clone process can be run. Follow the steps outlined in
the online documentation.
• Select the default “Latest Backup” and click the “View Content” button.
• In the resulting view select the database and click on the “Clone” button.
• In the resulting pop-up window, select the destination client from the drop-down list (it will be the same client, since we added the CL1 database to the same client as the source) and also select the destination database “CL1”.
• For the “Destination Instance HANA Data Directory”, point to the PARENT of the database directory, “/hana/data”, NOT the actual database directory “/hana/data/CL1”.
• In the “Clone Options” tab there is an option to select “Snap Mount Location” as well as a
“Reservation Period”. The snap mount location designates the temporary location where the
snapshot will be mounted, and the reservation period determines the duration of time that
the cloned database will exist. After that period expires, the clone is deleted and the snap is
unmounted from the server. For this example the defaults will be used.
• After clicking the OK button one last dialog will pop up prior to initiating the clone process. Click
the OK button to begin the cloning job.
• Note that 2 jobs will ultimately appear in the Job Controller: the GUI-started restore, as well as a “command line restore” that the client iDA initiates.
• As the clone operation nears the completion, application log backup jobs will begin to appear in
the job controller.
• At this point the clone operation is essentially complete and the CL1 database should be open
and accessible in HANA studio.
• On the actual Linux host, one can see the location where the iDA mounted the snapshot by running the df command:
hana4:~ # df
Filesystem                     1K-blocks      Used Available Use% Mounted on
/dev/sda2                      102096224  79742468  17167504  83% /
udev                            16504788       112  16504676   1% /dev
tmpfs                           25165824        72  25165752   1% /dev/shm
172.19.122.103:/vol/DPR_HANA2   19922944   9132544  10790400  46% /hsnap
172.19.122.103:/vol/DPR_HANA2_267937_SP_2_267795_8659_1475262087_CVclone
                                19922944   5536704  14386240  28% /opt/commvault/Base64/Temp/267937
• Also, if one were to look at the /hana/data directory, one would see a symbolic link pointing to the mounted snapshot:
hana4:~ # cd /hana/data
hana4:/hana/data # ls -l
total 4
lrwxrwxrwx 1 root sapsys 40 Sep 30 19:20 CL1 -> /opt/commvault/Base/Temp/267937/data/TD3
drwxr-x--- 3 cl1adm sapsys 4096 Sep 27 13:11 CL1.b4clone
• Note that the original CL1 directory did not get removed, but instead was renamed to CL1.b4clone. This is all done automatically by the agent during the cloning process.
• After about 1 hour (or however long the clone was reserved) the clone will be shut down and the mounts removed. Also note that the CL1.b4clone directory is renamed back to the original CL1.
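The rename-and-link behavior described above can be sketched as follows. The paths are sandboxed under /tmp so the sketch runs anywhere; on the real host the agent performs the equivalent steps against /hana/data and the mounted snapshot directory.

```shell
DATA=/tmp/hana/data
SNAP=/tmp/snapmount/data/TD3               # stands in for the mounted snapshot
rm -rf "$DATA" "$SNAP"
mkdir -p "$DATA/CL1" "$SNAP"
mv "$DATA/CL1" "$DATA/CL1.b4clone"         # original directory is preserved, not removed
ln -s "$SNAP" "$DATA/CL1"                  # CL1 now points into the mounted snapshot
ls -l "$DATA"
```

Reversing the two operations (remove the link, rename CL1.b4clone back) restores the original state, which matches what the agent does when the reservation period expires.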
A couple of SAP-side requirements and caveats apply when it comes to protecting SAP HANA MDC installations via backint agents. First of all, HANA SPS 09 Rev 94 or higher is required. Please also note that currently only streaming backup is possible (no snapshots) and restores can be in-place only. See SAP Note 2096000 for more details.
CONFIGURATION
Configuring a HANA MDC instance on the OS level (installing the agent, symbolic links, parameter file) and in the Commvault GUI is done using the SID of the MDC instance and is no different from a non-MDC setup. In HANA Studio you will notice that only the system database has a configuration tab as part of the backup console. This is the place where the configuration described before needs to be put in place.
BACKUP
As of Commvault V11 SP3, it is not (yet) supported to run and schedule backups out of the GUI; this is planned for a later V11 Service Pack. Backups can be initiated via HANA Studio or using a script. In HANA Studio, when selecting “Backup and Recovery” at the level of the system database, one can kick off backups of the system or a tenant database.
After selecting tenant, one can pick which tenant database to back up.
Afterwards, the backup starts and progresses as usual. When the job has finished, it is recorded in the backup catalog of the tenant database.
Please consult Commvault online documentation for an example of a tenant backup script.
RESTORE
Restores of a SAP HANA MDC setup are supported via HANA Studio or via script. Restoring via HANA Studio starts with picking the desired tenant.
In the following screen, one is presented with the backup catalog of that tenant. After selecting the backup to restore from, start the process.
Please consult Commvault online documentation for an example of a script-based tenant restore.
INSTALL AND CONFIGURATION
Install and configure the software on each node in the replication environment as per the
Installation and Configuration instructions in this guide.
Once the agent is installed on every node in the environment, the SAP environment must also be
configured in HANA Studio to recognize the Commvault iDA. Follow the steps for SAP HANA Studio
Configuration in this guide to configure the Primary replication node for backup. This cannot be
done on the secondary node as it is in a “standby” operational mode, where it is not possible to
configure the backup parameters.
However, since SPS12, HANA ini file parameter replication is possible, whereby changes made on the primary are automatically replicated to the secondary sites, including the backup parameters configured to use the Commvault iDA. This is controlled by a configuration parameter called inifile_checker/replicate in the global.ini section of the configuration tab in HANA Studio. The default setting is false, but setting it to true will cause the replication process to automatically transfer the backup settings of the primary database to the secondary database. If this is not done, then after a takeover occurs, the secondary will need to be configured BEFORE backups can run from that node. Another option would be to manually edit the global.ini file on the secondary to match the backup configuration on the primary once the primary has been configured. There is another parameter in global.ini, inifile_checker/enable, which is enabled by default; once the settings have been changed on the primary node, the differences in the parameter file can be seen in HANA Studio by looking at the alerts tab in the Studio GUI.
This can be used to determine what needs to be adjusted on the secondary node, so that after a takeover event backups can start running immediately without having to go through the configuration in HANA Studio on the second node. For example, here are parameters from the global.ini file on a test server in the lab:
[backup]
data_backup_parameter_file = /usr/sap/PDB/SYS/global/hdb/opt/hdbconfig/param
log_backup_parameter_file = /usr/sap/PDB/SYS/global/hdb/opt/hdbconfig/param
log_backup_using_backint = true
[inifile_checker]
replicate = true
[persistence]
basepath_logbackup = /usr/sap/PDB/HDB02/backup/log
basepath_databackup = /usr/sap/PDB/HDB02/backup/data
enable_auto_log_backup = yes
log_backup_timeout_s = 14400
log_mode = normal
The [backup] and [inifile_checker] sections above contain the parameters that were updated on the primary through HANA Studio when the configuration was updated for Commvault. Note that the inifile_checker parameter is also stored here. If these changes are not made in the global.ini file on the secondary node, backups will not run to Commvault after a “takeover” occurs.
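A quick OS-level way to spot such drift is to diff the two files. The sample files below are generated so the sketch runs anywhere; on real hosts one would diff the primary’s and secondary’s actual global.ini files:

```shell
mkdir -p /tmp/ini/primary /tmp/ini/secondary
cat > /tmp/ini/primary/global.ini <<'EOF'
[backup]
log_backup_using_backint = true
EOF
cat > /tmp/ini/secondary/global.ini <<'EOF'
[backup]
log_backup_using_backint = false
EOF
# diff exits non-zero when files differ; '|| true' keeps the script going
DIFF=$(diff /tmp/ini/primary/global.ini /tmp/ini/secondary/global.ini || true)
echo "$DIFF"
```

Any line appearing in the output is a setting the secondary would be missing after a takeover.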
COMMVAULT CONFIGURATION
In the Commvault GUI, under the instance properties of the HANA database, there is a “Details” tab that contains ALL the nodes configured in the HANA replication environment. Here is a snapshot of a lab server with two nodes configured in the instance:
As stated in the documentation, the Primary node MUST always be listed first in the details tab, or backups run from the CommServe will not work. After a “takeover” event, the order of these nodes must be updated in order to continue running the backups through the Commvault GUI or Commvault scheduler. This can also be done from the command line using the qcommand feature in Commvault. For more information see the following link: Modifying SAP HANA Instances.
Backups run through HANA Studio are not affected by the order of the nodes in the instance configuration, as long as the node being used to run the backup is listed in the details tab.
For running backups and restores see previous sections or the online documentation.
APPENDIX
Below are some common commands for managing replication in SAP HANA. These commands are run by the SAP
admin user <SID>adm.
• hdbnsutil -sr_disable
Disable replication. Executed from the Primary.
• hdbnsutil -sr_takeover
Initiate a “takeover” process. Executed on the secondary node.
For more information on replication or these commands, please refer to the SAP documentation on how to set up
HANA replication. This can be obtained at the following SAP link: How-to Perform System Replication for SAP
HANA Guide
© 2017 Commvault Systems, Inc. All rights reserved. Commvault, Commvault and logo, the “C hexagon” logo, Commvault Systems, Commvault
OnePass, CommServe, CommCell, IntelliSnap, Commvault Edge, and Edge Drive, are trademarks or registered trademarks of Commvault
Systems, Inc. All other third party brands, products, service names, trademarks, or registered service marks are the property of and used to
identify the products or services of their respective owners. All specifications are subject to change without notice.