
1.5 Introduction to TS3500 Tape Library ................................................ 26
1.5.1 Components of the library ........................................................ 30
1.5.2 Tape Drives ...................................................................... 32
1.6 Advanced Library Management System ................................................. 32
2 Introducing Bare Metal Restore ....................................................... 33
2.1 BMR Planning ....................................................................... 33
2.1.1 BMR Boot Servers ................................................................. 34
2.1.2 Protecting clients ............................................................... 37
3 Different Backup Methods ............................................................. 38
3.1 Backups based on LAN ............................................................... 38
3.2 SAN style backups via SAN Client ................................................... 40
3.2.1 SAN Configuration ................................................................ 42
3.3 SAN style backups via SAN Media Server ............................................. 43
3.4 Snapshot Backup .................................................................... 44
3.4.1 EMC CLARiiON Software Requirements ............................................... 45
3.4.2 Instant Recovery ................................................................. 46
3.5 NetBackup for Oracle ............................................................... 46
3.5.1 Configuring a backup policy for an Oracle database ............................... 46
3.5.2 Adding backup selections to an Oracle policy ..................................... 47
3.5.3 NetBackup for Oracle with Snapshot Client ........................................ 48
3.6 NetBackup for SQL Server ........................................................... 48
3.6.1 Configuring a backup policy for an SQL Server database ........................... 48
3.7 VMware Backup ...................................................................... 49
3.8 Active Directory Backup ............................................................ 51
3.8.1 Backup Policy for Active Directory ............................................... 52
3.8.2 Requirement for Active Directory Granular Recovery ............................... 52
4 MCI Backup Requirements and Fulfillments ............................................. 53
4.1 OS & Application Files ............................................................. 53
4.1.1 OS & Application Reside On Local Disks ........................................... 53
4.1.2 OS & Application Reside On SAN Volumes ........................................... 61
4.2 Application Data Files ............................................................. 64
4.2.1 SenSage Servers .................................................................. 66
4.2.2 Rating and Mediation Servers ..................................................... 69
4.2.3 Other Application Servers ........................................................ 71
4.3 Databases .......................................................................... 74
4.3.1 Oracle Databases ................................................................. 74
4.3.2 SQL Server Databases ............................................................. 77
4.3.3 MYSQL Databases .................................................................. 78
4.4 Active Directory ................................................................... 78
5 NetBackup Vault ...................................................................... 80
5.1 Licensing Vault .................................................................... 81
5.2 Vaulting Strategy .................................................................. 81
5.3 Vault Process ...................................................................... 82
5.4 Vault Reports ...................................................................... 85
5.4.1 Reports for media going offsite .................................................. 86
5.4.2 Reports for media coming on-site ................................................. 87
5.4.3 Inventory reports ................................................................ 87
6 Configuring storage .................................................................. 88
6.1 Configuring disk storage ........................................................... 89
6.2 Configuring Tape Library ........................................................... 90
6.2.1 Media sharing .................................................................... 90
6.2.2 Naming Tape Drives ............................................................... 91
6.2.3 Bar Code Label ................................................................... 92
6.2.4 Tape Volumes ..................................................................... 93
6.2.5 Volume pools ..................................................................... 93
6.2.6 Tape Drive Cleaning .............................................................. 95
7 Routine Operation .................................................................... 95
7.1 NetBackup Catalog Backup ........................................................... 95
7.2 Catalog Archiving .................................................................. 96
7.3 Monitoring and Reporting ........................................................... 97
7.3.1 Activity Monitor ................................................................. 97
7.3.2 Auditing Manager ................................................................. 98
7.3.3 Reports utility .................................................................. 98

Table 1: Hardware specifications of each of the NetBackup servers ...................... 14
Table 2: Client License Tiers .......................................................... 18
Table 3: Server List with Standard Client License ...................................... 22
Table 4: Server list with Standard Client + APP & DB Agent ............................. 23
Table 5: Server List with Enterprise Client + APP & DB Agent Tier 2 .................... 23
Table 6: Server List with Enterprise Client Tier 2 ..................................... 25
Table 7: Server List with Enterprise Client + APP & DB Agent Tier 3 .................... 26
Table 8: Server List with Enterprise Client + APP & DB Agent Tier 4 .................... 26
Table 9: TS3500 Tape Library Frame Types ............................................... 28
Table 10: TS1040 Tape Drive Specification .............................................. 32
Table 11: List of Required BMR Boot Servers ............................................ 35
Table 12: BMR Boot Servers Specification ............................................... 36
Table 13: EMC CLARiiON Snapshot Methods ................................................ 44
Table 14: EMC Snapshot Software Requirement ............................................ 45
Table 15: Oracle Backup Levels ......................................................... 47
Table 16: MS-SQL Backup Levels ......................................................... 49
Table 17: Required NFS Components ...................................................... 53
Table 18: Policy Attributes for OS & APP Backup ........................................ 54
Table 19: Schedule Attributes for OS & APP Backup ...................................... 55
Table 20: Client List for OS & APP Backup Policies ..................................... 58
Table 21: Backup Volume Calculation - File System ...................................... 59
Table 22: Transfer Rates - Different Network Technologies .............................. 59
Table 23: Summary Backup Policy - OS & APP ............................................. 60
Table 24: Off-Site Tape Requirement - OS & APP Backup .................................. 60
Table 25: Disk Storage Requirement - OS & APP Backup ................................... 60
Table 26: Storage Group Definition - OS & APP Backup ................................... 61
Table 27: List of SAN Media Servers .................................................... 61
Table 28: Backup Policy Summary - File System for Boot ON SAN servers .................. 62
Table 29: Policy Attributes - OS & APP Backup - Boot on SAN Servers .................... 62
Table 30: Schedule Attributes - OS & APP Backup - Boot on SAN Servers .................. 62
Table 31: Client List - FS Backup for Boot on SAN Servers .............................. 63
Table 32: Disk Storage Requirement - OS & APP Boot on SAN .............................. 63
Table 33: Off-Site Tape Requirement - OS & APP BootOnSAN ............................... 63
Table 34: STUG Definition - OS & APP Backup BootOnSAN .................................. 64
Table 35: Application Data Files on Servers ............................................ 65
Table 36: SenSage Backup Requirement ................................................... 66
Table 37: SenSage Backup Volume Calculation ............................................ 67
Table 38: Backup Volume - SenSage Backup ............................................... 67
Table 39: Tape Requirement - SenSage Backup ............................................ 67
Table 40: Backup Traffic Rate - SenSage Servers ........................................ 67
Table 41: Tape drive data transfer rate ................................................ 68
Table 42: Backup Rate - arch-sls Servers ............................................... 68
Table 43: Policy Definition - SenSage Backup ........................................... 68
Table 44: Schedule Definition - SenSage Backup ......................................... 69
Table 45: ............................................................................. 70
Table 46: Policy Attributes - Mediation & Rating Servers ............................... 70
Table 47: Schedule Attributes - Mediation & Rating Servers ............................. 71
Table 48: Storage Requirement Summary .................................................. 71
Table 49: Client List-1 ................................................................ 72
Table 50: Client List-2 ................................................................ 72
Table 51: Policy for other servers ..................................................... 72
Table 52: Schedule for other servers ................................................... 72
Table 53: Policy Summary - Other Servers ............................................... 73
Table 54: Disk Space Requirement ....................................................... 73
Table 55: Tape Requirement ............................................................. 73
Table 56: Storage Units on Disk ........................................................ 73
Table 57: Oracle Databases ............................................................. 74
Table 58: Estimated size of Oracle Databases ........................................... 75
Table 59: Oracle Backup Summary - Large Size Database .................................. 76
Table 60: Oracle Backup Summary - Normal Size Database ................................. 76
Table 61: Storage Requirement - Large Size Oracle DB ................................... 77
Table 62: Storage Requirement - Normal Size Oracle DB .................................. 77
Table 63: MS-SQL Server Database ....................................................... 77
Table 64: Policy Attributes - MSSQL Server ............................................. 77
Table 65: Schedule Attributes - MSSQL Server ........................................... 78
Table 66: MYSQL Database List .......................................................... 78
Table 67: ............................................................................. 79
Table 68: Active Directory Schedule Attributes ......................................... 79
Table 69: Offsite Volume Pools ......................................................... 83
Table 70: Vault Catalog Backup ......................................................... 83
Table 71: Picking List for Robot report ................................................ 86
Table 72: Disk Storage Configuration ................................................... 90
Table 73: Disk Storage Space Allocation ................................................ 90
Table 74: Tape Drives Naming ........................................................... 92
Table 75: Media ID Generation Rule ..................................................... 93
Table 76: Default Volume Pools ......................................................... 94
Table 77: Volume Pools to be defined ................................................... 94

Figure 1: NetBackup storage domain ..................................................... 13
Figure 2: TS3500 Tape Library Frame Position ........................................... 29
Figure 3: Components of the IBM System Storage TS3500 .................................. 31
Figure 4: Sample BMR Network ........................................................... 34
Figure 5: BMR Networking Topology ...................................................... 35
Figure 6: Backup Traffic Flow - LAN Backup ............................................. 39
Figure 7: Restore Traffic Flow through LAN ............................................. 40
Figure 8: Backup Traffic Flow - SAN Client ............................................. 41
Figure 9: Restore Traffic Flow - SAN Clients ........................................... 42
Figure 10: Backup Traffic Flow - SAN Media Server ...................................... 43
Figure 11: Snapshot Process ............................................................ 45
Figure 12: Backup Traffic Flow - VMware backup ......................................... 50
Figure 13: VMware Restore Traffic Flow ................................................. 51
Figure 14: Relationship of Vault and other NetBackup components ........................ 80
Figure 15: Vault Eject Interface ....................................................... 84


Having a sustainable operating environment is vitally important for every business. In an environment like the MCI Datacenter, which hosts a huge amount of valuable information and data, every minute of downtime or service interruption means losing business value, which harms the entire operation of the operator. To guarantee sustainable operation in such environments, a wide variety of technologies and techniques are used to ensure high availability and data protection. The architecture of the MCI Datacenter includes redundancy in different layers, from the entire I/O system to servers and applications, by means of clustering or the use of mirrors and RAID. These are building blocks for highly available data access, but not an entire data protection solution. Mirrored and RAID arrays store blocks of binary data reliably, regardless of their meaning; a RAID array stores incorrect data just as reliably as it stores correct data. Mirroring and RAID do not protect against data corruption due to human errors or application faults. A regular backup with an enterprise backup and recovery application offers the only realistic protection against these causes of data loss.

So the primary goals of the backup are to be able to do the following:
- Enable normal services to resume as quickly as is physically possible after any system component failure or application error.
- Enable data to be delivered to where it is needed, when it is needed.
- Meet the regulatory and business data retention requirements.
- Meet recovery goals, and in the event of a disaster, return the business to the required operational level.

To achieve these goals, the backup and recovery solution must be able to do the following:
- Make copies of all the data, regardless of its type or structure, the platform upon which it is stored, or the application from which it is born.
- Manage the media that contain these copies, and in the case of tape, track the media regardless of their number or location.
- Provide the ability to make additional copies of the data.
- Scale as the enterprise scales, so that the technology can remain cost effective.

In the rest of this document we'll explain the backup and recovery architecture for the MCI Tohid Datacenter, which is designed to achieve the goals mentioned above.

There are different types of data and data stores at the MCI Tohid Datacenter which should be backed up and protected by the centralized backup and recovery solution. According to the high level design, these data types can be categorized into two parts:

1. Data and files which belong to OS systems. The OS systems used in this DC are: Solaris 10, RedHat Enterprise Linux 5, Windows 2008 and Windows 2003.
2. Application data. This type of data includes:
   a. Data belonging to fundamental and management services like DNS, Active Directory, NMS, etc.
   b. Data belonging to business applications like Billing, Automation System and Financial Systems, which are business critical.

OS system files are stored on DAS (Direct Attached Storage) for entry-level and mid-range servers; for high-end servers these files are stored on the SAN (Storage Area Network). Application data belonging to fundamental, management and security services are stored on DAS; in this case the data files don't occupy much space and they mostly reside on entry-level servers. The data belonging to web applications and application software are also stored on DAS; these types of data don't need high capacity and can be stored on local disks. The most important data, the business critical data, are those related to MCI business. These kinds of data are stored on clustered databases, and SAN technology is used as the storage system. Besides the data mentioned above, there is another kind of data which has passed the first phase of its life cycle and is stored using NAS technology; CDRs and event logs are the most important of this type.

So the backup and recovery solution should cover all the mentioned types of data stored on the different locations, including SAN, NAS and DAS. It should also be able to back up different OS types and databases without interrupting system operation.


As virtualization is used inside the MCI Datacenter to lower the CAPEX for hardware and minimize the OPEX for servers which require fewer resources, the backup solution should be able to back up virtual machines as well as the different OS types.

This document describes the methods, topologies and techniques that are used by the backup solution to cover each of the requirements mentioned above. The first chapter, the introduction, gives an overview of the backup solution and the specific features and functions of NetBackup. We'll also introduce the IBM tape library as the main storage system of our backup and recovery solution. Chapter 2 explains one of the most important features of NetBackup, BMR. Chapter 3 explains the different backup methods that will be used to create backup images for the different types of data. In chapter 4, we'll examine the backup requirements specified by the relevant MCI system administrators and plan how to meet those requirements. Chapter 5 is about another option of NetBackup, named Vault, which is mostly used for off-site backup management. Chapter 6 covers the configuration of the storage systems that will be used as targets for backup images, including the CX4-240 as disk storage and the IBM TS3500 as tape storage. The last chapter explains the major routine jobs for the operation of the backup system.

NetBackup provides a complete, flexible data protection solution for a variety of platforms, including Microsoft Windows, UNIX, Linux, and NetWare systems. NetBackup administrators can set up periodic or calendar-based schedules to perform automatic, unattended backups for clients across a network. An administrator can carefully schedule backups to achieve systematic and complete backups over a period of time, and optimize network traffic during off-peak hours. NetBackup includes both server and client software:
- Server software resides on the computer that manages the storage devices.
- Client software resides on the computers that contain data to back up. (Servers also contain client software and can be backed up.)

Figure 1 shows an example of a NetBackup storage domain.


NetBackup accommodates multiple servers that work together under the administrative control of one NetBackup master server in the following ways:
- The master server manages backups, archives, and restores. The master server is responsible for media and device selection for NetBackup. Typically, the master server contains the NetBackup catalog, which holds the internal databases with information about NetBackup backups and configuration.
- Media servers provide additional storage by allowing NetBackup to use the storage devices that are attached to them. Media servers can also increase performance by distributing the network load.

During a backup or archive, the client sends backup data across the network to a NetBackup server. The NetBackup server manages the type of storage that is specified in the backup policy. During a restore, users can browse, and then select the files and directories to recover. NetBackup finds the selected files and directories and restores them to the disk on the client.
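As a minimal operational sketch of this master/media/client division of labor (not part of the design deliverables themselves), the master server's command-line tools can be driven from a script. The install path below is the NetBackup default on UNIX; the policy and schedule names are hypothetical placeholders, not names defined in this document.

```python
# Sketch: query the master server's policy list and kick off a manual backup.
# Assumes NetBackup is installed at its default UNIX path; the policy and
# schedule names used at the bottom are hypothetical.
import subprocess

ADMINCMD = "/usr/openv/netbackup/bin/admincmd"  # default UNIX location
BIN = "/usr/openv/netbackup/bin"

def list_policies() -> list[str]:
    """Return the names of the backup policies defined on this master."""
    out = subprocess.run([f"{ADMINCMD}/bppllist"],
                         capture_output=True, text=True, check=True)
    return out.stdout.split()

def start_manual_backup(policy: str, schedule: str) -> None:
    """Start an immediate (manual) backup of all clients in the policy."""
    subprocess.run([f"{BIN}/bpbackup", "-i", "-p", policy, "-s", schedule],
                   check=True)

if __name__ == "__main__":
    print(list_policies())
    start_manual_backup("os-app-backup", "full-weekly")  # hypothetical names
```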


For the MCI Tohid Datacenter, the NetBackup Enterprise solution will be deployed, which is composed of three main parts: master server, media server and client. Besides these three main roles we'll have another three servers as BMR boot servers, which are used for the implementation of BMR, an important option of the NetBackup solution. The installation of these BMR boot servers and their functionality will be covered later in the BMR planning section. The following table lists the hardware specifications of each of the NetBackup servers.

No. | Hostname | Role | Model | CPUs | CPU Type | Cores/CPU | RAM (GB) | Internal Disks | SAN LUNs | SAN Capacity (TB) | RAID | 2nd LUN (TB) | FC Ports | NICs | PSUs | Storage Array
1 | mstr-nbkp-srv1 | VERITAS NetBackup Master Server - Backup Management Server | DL380 | 2 | Xeon X5570 | 4 | 4 | 2 x 146 GB | 1 | 1.5 | 1+0 | 1.5 | 4 | 2 | 2 | CX4-960
2 | mstr-nbkp-srv2 | VERITAS NetBackup Master Server - Backup Management Server | DL380 | 2 | Xeon X5570 | 4 | 4 | 2 x 146 GB | 1 | 1.5 | 1+0 | 1.5 | 4 | 2 | 2 | CX4-960
3 | media-nbkp-srv1 | Veritas NetBackup Media Server - LAN Backup Management Server | DL380 | 2 | Xeon X5570 | 4 | 8 | 2 x 146 GB | 1 | 18 | 5 | | 4 | 2 | 2 | CX4-240
4 | media-nbkp-srv2 | Veritas NetBackup Media Server - SAN Backup Management Server | DL380 | 2 | Xeon X5570 | 4 | 8 | 2 x 146 GB | 1 | 31 | 5 | | 4 | 2 | 2 | CX4-240
5 | linux-bmr-srv | Veritas NetBackup Linux BMR Boot Server | DL380 | 2 | Xeon X5570 | 4 | 4 | 2 x 146 GB | 1 | 2 | 5 | | 4 | 2 | 2 | CX4-240
6 | win-bmr-srv | Veritas NetBackup Windows BMR Boot Server | DL380 | 2 | Xeon X5570 | 4 | 4 | 2 x 146 GB | 1 | 2 | 5 | | 4 | 2 | 2 | CX4-240
7 | solaris-bmr-srv | Veritas NetBackup Solaris BMR Boot Server | T5140 | 1 | UltraSPARC T2 Plus | 4 | 8 | 2 x 146 GB | 1 | 2 | 5 | | 4 | 2 | 2 | CX4-240
8 | vm-bkphost-srv | VM Backup Host | DL380 | 2 | Xeon X5570 | 4 | 8 | 2 x 72 GB | 1 | 2 | 5 | | 4 | 2 | 2 | CX4-240

- Increase efficiencies by managing all data protection technologies and multiple NetBackup servers and domains from one location.
- Quickly restore files, emails and other granular items from Microsoft Exchange, SharePoint, and Active Directory, and for hypervisors such as VMware and Hyper-V.
- Benefit from a flexible, three-tiered architecture that scales with the needs of today's growing data center.

- Fully automated and integrated system recovery with NetBackup Bare Metal Restore, built-in replication, and offsite tape management.
- Flexible encryption technologies for maximum data security, both in transit and on media.

For the protection of business-critical applications and databases, NetBackup provides application-aware agents that enable hot/online backup, wizard-based configuration, and support for application-specific tools such as Oracle Recovery Manager (RMAN). NetBackup provides a variety of technologies that ensure data can be recovered quickly, even instantly, from anywhere, and with minimal data loss.

- Enables quick client restore from a single backup image for decreased application host impact and less network bandwidth.
- Facilitates faster backups and restores, since there is no tape device latency, and non-multiplexed backup images can be used for faster recovery.
- Allows a failed backup or recovery job to be resumed from the last checkpoint.
- Writes multiple data streams from one or more clients/servers to a single tape drive for optimum performance.
- Enables the creation of multiple concurrent backup images, each with unique retention attributes, run either simultaneously with or after completion of the primary backup.
- Reduces tape drive configuration time with the automatic generation of drive names and configuration of swapped tape drives.
- Allows multiple NetBackup media servers to actively share a given tape media for write purposes.


NetBackup for Microsoft SQL Server delivers comprehensive data protection for SQL Server and SQL Server databases. Features and benefits:
- Verify-only restores can be used to verify the SQL contents of a backup image without actually restoring the data.
- Recovers SQL databases to an exact point in time or transaction log mark by rolling forward only the transactions that occurred prior to a user-specified date and time.
- Display of database object properties provides backup and recovery flexibility.

The NetBackup Oracle Agent is tightly integrated with Oracle Recovery Manager (RMAN) to deliver high performance backup and recovery solutions. Features and benefits:
- Keeps the database online and increases reliability by eliminating manual processes and scripts.
- Tightly integrated with the Oracle Recovery Manager (RMAN) wizard to deliver high performance backup and recovery.

NetBackup options enhance data protection environments with the features listed below:

Provides all the files and services necessary to perform system recovery, including the ability to perform diskless network booting, temporary OS installation, and disk configuration.

Designed for customers who would prefer to utilize their storage area network (SAN) for backup operations instead of their local area network (LAN). This feature enables LAN-free data protection with high-performance access to shared resources.

Offloads backup traffic from the LAN and allows for fast backups over the SAN at approximately 150 MB/s. The SAN client can send data to a variety of NetBackup disk options and allows us to back up and restore to disk over the SAN. Data is sent to media servers via SCSI commands over the SAN rather than TCP/IP over the LAN to optimize performance.
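A back-of-the-envelope comparison shows what this difference in transport rate means for the backup window. The ~150 MB/s SAN client figure is quoted above; the ~100 MB/s effective Gigabit Ethernet rate is an assumption for illustration, not a measured value from this environment.

```python
# Sketch: backup window (hours) for a given data volume at a given rate.
def backup_window_hours(data_gb: float, rate_mb_s: float) -> float:
    return data_gb * 1024 / rate_mb_s / 3600

DATA_GB = 1000  # e.g. a 1 TB file system
print(f"LAN backup  (~100 MB/s): {backup_window_hours(DATA_GB, 100):.1f} h")  # ~2.8 h
print(f"SAN client  (~150 MB/s): {backup_window_hours(DATA_GB, 150):.1f} h")  # ~1.9 h
```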

NetBackup can provide both client and virtual machine (VM) level protection using the vStorage API. This integration enables a single backup of VMware images to deliver granular file-level or full image-level recovery, reducing both the time and cost of VMware data protection.

Share tape drives across NetBackup media servers and a SAN for enhanced performance and a higher return on investment for tape drive and library hardware. Some key benefits include:
- Minimized backup costs: increase tape drive utilization and lower the total number of drives required.
- Rapid deployment: graphical wizards quickly discover and configure shared tape drives.
- Increased fault tolerance: access additional tape resources in the event of a drive or network failure; includes support for multiple paths to tape drives.

Helps ensure that tapes being transported offsite cannot be read in the event they are lost, mishandled, or stolen. MSEO provides maximum flexibility and performance by providing parallelized and selectable encryption and compression as well as "set it and forget it" key management. Some key benefits include:
- Encrypt within the NetBackup policy, eliminating a separate process or an extra dedicated device to manage.
- Choose what data you want to encrypt and then choose the appropriate compression and encryption strength (AES 128-bit or AES 256-bit).
- Includes support for disk staging to tape and the creation of tape copies for offsite purposes.


There are two different licenses for the clients in the NetBackup Enterprise solution. We have to choose the right license type for each host so that the backup operation can run properly. The following is a brief description of each license type and the relevant license for each host in the MCI Tohid Datacenter.

The NetBackup Standard Client contains key features such as bare metal restore and client encryption. It resides on the same server as the application, database, or files that are being protected, and sends data to a NetBackup server for protection or receives data during a recovery.

The NetBackup Enterprise Client contains the functionality of the Standard Client plus many more advanced features that maximize backup performance while potentially reducing the impact of backups, such as Snapshot Client, SAN Client, SAN Media Server, and integrated protection for VMware and Hyper-V environments. The NetBackup Enterprise Client is licensed per system (on a tier basis) and is ideal for systems requiring high performance, low impact protection. The tier of the required license is determined by the number of populated processor sockets on the machine, as shown in the following table:

Tier 1 | 1 to 2 processor sockets | 1 processor socket
Tier 2 | 3 to 4 processor sockets | 2 to 3 processor sockets
Tier 3 | 5 to 12 processor sockets | 4 to 7 processor sockets
Tier 4 | 13 or more processor sockets | 8 or more processor sockets
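As an illustration, a minimal sketch of this tier lookup, assuming the right-hand socket banding is the operative one (it matches the Tier 2-4 server assignments in the tables below; the column headers of the original table were lost, so this is an interpretation, not a statement of the license terms):

```python
# Sketch: map populated processor sockets to an Enterprise Client tier,
# using the right-hand banding from the table above (an assumption).
def enterprise_client_tier(populated_sockets: int) -> int:
    if populated_sockets <= 1:
        return 1
    if populated_sockets <= 3:
        return 2
    if populated_sockets <= 7:
        return 3
    return 4

assert enterprise_client_tier(2) == 2   # 2-socket T5440 -> Tier 2
assert enterprise_client_tier(4) == 3   # 4-socket T5440 -> Tier 3
assert enterprise_client_tier(8) == 4   # 8-socket M9000 domain -> Tier 4
```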

According to the above, the type of client license for each of the servers will be as shown in the following tables:


No. | Hostname | Role | Model | OS | CPUs | CPU Type
1 | chrgcrd-web-srv1 | Charge Card Web/Application Server | DL380 | RedHat Enterprise Linux 5 | 2 | Xeon X5570
2 | chrgcrd-web-srv2 | Charge Card Web/Application Server | DL380 | RedHat Enterprise Linux 5 | 2 | Xeon X5570
3 | vas-app-srv1 | VAS Box Web/Application Server (Pardis) | T5440 | Solaris 10 | 2 | UltraSPARC T2 Plus
4 | vas-app-srv2 | VAS Box Web/Application Server (Pardis) | T5440 | Solaris 10 | 2 | UltraSPARC T2 Plus
5 | vas-app-srv3 | VAS Box Web/Application Server (Pardis) | T5440 | Solaris 10 | 2 | UltraSPARC T2 Plus
6 | vas-app-srv4 | VAS Box Web/Application Server (Pardis) | T5440 | Solaris 10 | 2 | UltraSPARC T2 Plus
7 | portal-web-srv1 | Portal Web/Application Server | T5440 | Solaris 10 | 4 | UltraSPARC T2 Plus
8 | portal-web-srv2 | Portal Web/Application Server | T5440 | Solaris 10 | 4 | UltraSPARC T2 Plus
9 | reg-web-srv1 | Registration Web/Application Server | T5440 | Solaris 10 | 4 | UltraSPARC T2 Plus
10 | reg-web-srv2 | Registration Web/Application Server | T5440 | Solaris 10 | 4 | UltraSPARC T2 Plus
11 | otp-app-srv1 | OTP Application Server | T5440 | Solaris 10 | 2 | UltraSPARC T2 Plus
12 | otp-app-srv2 | OTP Application Server | T5440 | Solaris 10 | 2 | UltraSPARC T2 Plus
13 | otp-rep-srv | OTP Report Server | DL380 | Windows 2003 | 2 | Xeon X5570
14 | lms-netmgt-srv | CiscoWorks LMS Network Management Server | T5140 | Solaris 10 | 1 | UltraSPARC T2 Plus
15 | dcnm-netmgt-srv | Cisco Data Center Manager Network Management Server | DL380 | Windows 2003 | 2 | Xeon X5570
16 | nfc-trfmon-srv | Cisco NetFlow Collector Server | T5140 | Solaris 10 | 1 | UltraSPARC T2 Plus
17 | bluct-trfmon-srv | BlueCoat Traffic Management Server | DL380 | Windows 2003 | | Xeon X5570
18 | hpic-srvmgt-srv | HP System Insight Manager Servers Management Server | DL380 | Windows 2003 | | Xeon X5570
19 | smc-srvmgt-srv | Sun Management Center Servers Management Server | T5140 | Solaris 10 | 1 | UltraSPARC T2 Plus
20 | vc-vmmgt-srv1 | VMware vCenter Virtual Machines Management Server | DL380 | Windows 2003 | 2 | Xeon X5570
21 | vc-vmmgt-srv2 | VMware vCenter Virtual Machines Management Server | DL380 | Windows 2003 | 2 | Xeon X5570
22 | cfm-sanmgt-srv | Cisco Fabric Manager SAN Management Server | DL380 | Windows 2003 | 1 | Xeon X5570


23 | emc-strgmgt-srv | EMC Storage Systems Management Server | DL380 | Windows 2003 | 2 | Xeon X5570
24 | linux-bmr-srv | Veritas NetBackup Linux BMR Boot Server | DL380 | RedHat Enterprise Linux 5 | 2 | Xeon X5570
25 | win-bmr-srv | Veritas NetBackup Windows BMR Boot Server | DL380 | Windows 2003 | 2 | Xeon X5570
26 | solaris-bmr-srv | Veritas NetBackup Solaris BMR Boot Server | T5140 | Solaris 10 | 1 | UltraSPARC T2 Plus
27 | vm-bkphost-srv | VMware vStorage API Backup Host Server | DL380 | Windows 2003 | 2 | Xeon X5570
28 | csm-secmgt-srv | Cisco CSM Security Devices Management Server | DL380 | Windows 2003 | 2 | Xeon X5570
29 | nsm-secmgt-srv | Juniper NSM Security Devices Management Server | DL380 | RedHat Enterprise Linux 5 | 2 | Xeon X5570
30 | msad-dir-srv1 | Microsoft Active Directory Server | DL380 | Windows 2008 | 2 | Xeon X5570
31 | msad-dir-srv2 | Microsoft Active Directory Server | DL380 | Windows 2008 | 2 | Xeon X5570
32 | ms-ca-srv1 | Microsoft CA Server | DL380 | Windows 2008 | 1 | Xeon X5570
33 | ms-ca-srv2 | Microsoft CA Server | DL380 | Windows 2008 | 1 | Xeon X5570
34 | ilm-auth-srv1 | Microsoft ILM Authentication Integration Server | DL380 | Windows 2003 | 2 | Xeon X5570
35 | ilm-auth-srv2 | Microsoft ILM Authentication Integration Server | DL380 | Windows 2003 | 2 | Xeon X5570
36 | bigfix-patch-srv | BigFix Patch Management Server | DL380 | Windows 2003 | 1 | Xeon X5570
37 | kasper-av-srv | Kaspersky Anti-Virus Management Server | DL380 | Windows 2003 | 1 | Xeon X5570
38 | fprot-av-srv | FPROT Anti-Virus Management Server | T5140 | Solaris 10 | 1 | UltraSPARC T2 Plus
39 | csa-hipsmgt-srv | Cisco Security Agent HIPS Management Server | DL380 | Windows 2003 | 1 | Xeon X5570
40 | trpw-chngmgt-srv | Tripwire Change Management Server | DL380 | RedHat Enterprise Linux 5 | 1 | Xeon X5570
41 | sec-scan-srv | Security Scanning and Analysis Server | DL380 | Windows 2003 | 1 | Xeon X5570
42 | prtus-isms-srv | ISMS Implementation Tools Server | DL380 | Windows 2003 | 1 | Xeon X5570
43 | tvl-netview-srv | IBM Tivoli NetView Total Network Management Server | DL380 | Windows 2003 | 2 | Xeon X5570
44 | tvl-mon-srv | IBM Tivoli Monitoring Total Servers and Systems Management Server | DL380 | Windows 2003 | 2 | Xeon X5570
45 | tvl-ccmdb-srv | IBM Tivoli CCMDB Total Configuration Management Database Server | DL380 | Windows 2003 | | Xeon X5570


46 | tvl-srm-srv | IBM Tivoli Service Request Management Server | DL380 | Windows 2003 | | Xeon X5570
47 | tvl-addm-srv | IBM Tivoli Application Dependency Discovery Manager Server | T5140 | Solaris 10 | 1 | UltraSPARC T2 Plus
48 | tvl-omnibus-srv | IBM Tivoli Netcool/OMNIbus Server | T5440 | Solaris 10 | 4 | UltraSPARC T2 Plus
49 | archv-colct-srv1 | SenSage CDR Archiving Collector Server | DL380 | RedHat Enterprise Linux 5 | 2 | Xeon X5670
50 | archv-colct-srv2 | SenSage CDR Archiving Collector Server | DL380 | RedHat Enterprise Linux 5 | 2 | Xeon X5670
51 | archv-anlz-srv1 | SenSage CDR Archiving Analyzer Server | DL380 | RedHat Enterprise Linux 5 | 2 | Xeon X5670
52 | archv-anlz-srv2 | SenSage CDR Archiving Analyzer Server | DL380 | RedHat Enterprise Linux 5 | 2 | Xeon X5670
53 | snsg-colct-srv1 | SenSage Security Event Warehouse Collector Server | DL380 | RedHat Enterprise Linux 5 | 2 | Xeon X5670
54 | snsg-anlz-srv1 | SenSage Security Event Warehouse Analyzer Server | DL380 | RedHat Enterprise Linux 5 | 2 | Xeon X5670
55 | svn10-bkp-srv | Seven 10 Server | DL380 | RedHat Enterprise Linux 5 | 2 | Xeon E5620
56 | ext-dns-srv1, rslv-dns-srv1 | External DNS Server; DNS Resolver Server | DL380 | RedHat Enterprise Linux 5 | 2 | Xeon X5670
57 | ext-dns-srv2, rslv-dns-srv2 | External DNS Server; DNS Resolver Server | DL380 | RedHat Enterprise Linux 5 | | Xeon X5670
58 | int-mlgt-srv1 | Internal Mail Gateway Server | DL380 | RedHat Enterprise Linux 5 | 2 | Xeon X5670
59 | int-mlgt-srv2 | Internal Mail Gateway Server | DL380 | RedHat Enterprise Linux 5 | 2 | Xeon X5670
60 | int-mlst-srv-1 | Internal Mail Store Server | DL380 | RedHat Enterprise Linux 5 | 2 | Xeon X5670
61 | int-mlst-srv-2 | Internal Mail Store Server | DL380 | RedHat Enterprise Linux 5 | 2 | Xeon X5670
62 | emlrt-app-srv1 | Send SMS When Receive E-mail | DL380 | RedHat Enterprise Linux 5 | 2 | Xeon X5670
63 | emlrt-app-srv2 | Send SMS When Receive E-mail | DL380 | RedHat Enterprise Linux 5 | 2 | Xeon X5670
64 | rbt-app-srv1 | RingBack Tone Server | T5440 | RedHat Enterprise Linux 5 | 2 | UltraSPARC T2 Plus
65 | rbt-app-srv2 | RingBack Tone Server | T5440 | RedHat Enterprise Linux 5 | 2 | UltraSPARC T2 Plus
66 | rbt-app-srv3 | RingBack Tone Server | T5440 | RedHat Enterprise Linux 5 | 2 | UltraSPARC T2 Plus
67 | rbt-app-srv4 | RingBack Tone Server | T5440 | RedHat Enterprise Linux 5 | 2 | UltraSPARC T2 Plus
68 | smsbx-web-srv-1 | Web Service SMS Box Server 1 (URL Sender) | DL380 | RedHat Enterprise Linux 5 | 2 | Xeon X5670
69 | smsbx-web-srv-2 | Web Service SMS Box Server 2 (URL Sender) | DL380 | RedHat Enterprise Linux 5 | 2 | Xeon X5670
70 | smsbx-web-srv-3 | Web Service SMS Box Server 3 (URL Sender) | DL380 | RedHat Enterprise Linux 5 | 2 | Xeon X5670
71 | smsbx-web-srv-4 | Web Service SMS Box Server 4 (URL Sender) | DL380 | RedHat Enterprise Linux 5 | 2 | Xeon X5670
72 | sms-web-srv | Web SMS Server (SMS Bulk) | DL380 | RedHat Enterprise Linux 5 | 2 | Xeon E5620
73 | mmsbx-app-srv1 | MMS BOX Application Server 1 | T5440 | Solaris 10 | 4 | UltraSPARC T2 Plus
74 | mmsbx-app-srv2 | MMS BOX Application Server 2 | T5440 | Solaris 10 | 4 | UltraSPARC T2 Plus


75 | mmsbx-app-srv3 | MMS BOX Application Server 3 | T5440 | Solaris 10 | 4 | UltraSPARC T2 Plus
76 | mmsbx-app-srv4 | MMS BOX Application Server 4 | T5440 | Solaris 10 | 4 | UltraSPARC T2 Plus
77 | prvs-app-Srv1 | Comptel Service Provisioning Primary Server Balancing the Load-1 | DL580 | RedHat Enterprise Linux 5 | 4 | Xeon X7560
78 | prvs-app-Srv2 | Comptel Service Provisioning Primary Server Balancing the Load-2 | DL580 | RedHat Enterprise Linux 5 | 4 | Xeon X7560
79 | prvs-app-Srv3 | Comptel Service Provisioning Primary Server Balancing the Load-3 | DL580 | RedHat Enterprise Linux 5 | 2 | Xeon X7560
80 | prvs-app-Srv4 | Comptel Service Provisioning Primary Server Balancing the Load (Hot Backup)-1 | DL580 | RedHat Enterprise Linux 5 | 4 | Xeon X7560
81 | prvs-app-Srv5 | Comptel Service Provisioning Primary Server Balancing the Load (Hot Backup)-2 | DL580 | RedHat Enterprise Linux 5 | 4 | Xeon X7560
82 | prvs-app-Srv6 | Comptel Service Provisioning Primary Server Balancing the Load (Hot Backup)-3 | DL580 | RedHat Enterprise Linux 5 | 2 | Xeon X7560
83 | prvs-Test Bed-Srv | Comptel Service Provisioning Test Bed | DL580 | RedHat Enterprise Linux 5 | 2 | Xeon X7560
84 | prvsrep-App-srv | Comptel Service Provisioning Reporting Tool Application | DL580 | RedHat Enterprise Linux 5 | 4 | Xeon X7560
85 | prvsLeg-db-srv | Comptel Service Provisioning Legacy Database Server | DL380 | RedHat Enterprise Linux 5 | 1 | Xeon X5670
86 | cmptel-tst-srv | Test Server for Comptel | T5140 | Solaris 10 | 2 | UltraSPARC T2 Plus
87 | intbill-tst-srv | Test Server for Interconnect | T5440 | Solaris 10 | 2 | UltraSPARC T2 Plus
88 | agrt-web-srv3 | Aggregate Web Service Server 3 | DL580 | RedHat Enterprise Linux 5 | |
89 | agrt-web-srv4 | Aggregate Web Service Server 4 | DL580 | RedHat Enterprise Linux 5 | |
90 | BMSP-Vahdat-srv | Failover Servers, Tehran BMSP Servers | DL380 | Windows 2003 | |
91 | BMSP-Emam-srv | Failover Servers, Tehran BMSP Servers | DL380 | Windows 2003 | |
92 | BMSP-Yeganeh-srv | Failover Servers, Tehran BMSP Servers | DL380 | Windows 2003 | |
93 | BMSP-Ray-srv | Failover Servers, Tehran BMSP Servers | DL380 | Windows 2003 | |
94 | BMSP-Resalat-srv | Failover Servers, Tehran BMSP Servers | DL380 | Windows 2003 | |
95 | BMSP-Ghods-srv | Failover Servers, Tehran BMSP Servers | DL380 | Windows 2003 | |
96 | bcc-file-srv1, incc-file-srv1, sim-file-srv1, vch-file-srv1 | BehPardaz BCC File Server; BehPardaz IN CC File Server; BehPardaz SIM Bank Application (File) Server; BehPardaz Voucher Bank Application (File) Server | DL380 | Windows 2003 | | Xeon X5670
97 | bcc-file-srv2, incc-file-srv2, sim-file-srv2, vch-file-srv2 | BehPardaz BCC File Server; BehPardaz IN CC File Server; BehPardaz SIM Bank Application (File) Server; BehPardaz Voucher Bank Application (File) Server | DL380 | Windows 2003 | | Xeon X5670
98 | bcc-mdlwr-srv1, cc-mdlwr-srv1, incc-insvc-srv1 | BehPardaz BCC (BMSP) Middleware Server; BehPardaz Customer Care Middleware Application Server; BehPardaz IN Service Application Server | DL580 | Windows 2003 | | Xeon X7560
99 | bcc-mdlwr-srv2, cc-mdlwr-srv2, incc-insvc-srv2 | BehPardaz BCC (BMSP) Middleware Server; BehPardaz Customer Care Middleware Application Server; BehPardaz IN Service Application Server | DL580 | Windows 2003 | 4 | Xeon X7560
100 | bill-web-srv1, intbill-web-srv1 | BehPardaz Billing Web/Application Server; Intec Interconnect Billing Web/Application Server | T5440 | Solaris 10 | 4 | UltraSPARC T2 Plus
101 | bill-web-srv2, intbill-web-srv2 | BehPardaz Billing Web/Application Server; Intec Interconnect Billing Web/Application Server | T5440 | Solaris 10 | 4 | UltraSPARC T2 Plus
102 | bill-xlate-srv1, sa-app-srv1 | BehPardaz Billing Translator Application Server; BehPardaz Service Provisioning Application Server | DL580 | Windows 2003 | 4 | Xeon X7560
103 | bill-xlate-srv2, sa-app-srv2 | BehPardaz Billing Translator Application Server; BehPardaz Service Provisioning Application Server | DL580 | Windows 2003 | 4 | Xeon X7560


No. | Hostnames | Role | VM Host | Model | OS | CPUs | CPU Type
1 | bank-gw-srv1, Rep-Portal-srv1 | Banks Vosooli Gateway Application Server 10.0.0.177 | VM-DL380-1 | DL380 | Windows 2003 | 2 | Xeon X5570
2 | bank-gw-srv2, Rep-Portal-srv2 | Banks Vosooli Gateway Application Server 10.0.0.177 | VM-DL380-2 | DL380 | Windows 2003 | 2 | Xeon X5570
3 | tap-in-srv1, tap-out-srv1, int-tap-srv1 | Tap-In Application/Database Server (Roaming); Tap-Out Application/Database Server (Roaming); Kish & Esfehan Roaming (10.100.0.233) | VM-DL580-1 | DL580 | Windows 2003 | | Xeon X7460
4 | tap-in-srv2, tap-out-srv2, int-tap-srv2 | Tap-In Application/Database Server (Roaming); Tap-Out Application/Database Server (Roaming); Kish & Esfehan Roaming (10.100.0.233) | VM-DL580-2 | DL580 | Windows 2003 | | Xeon X7460
5 | sms-bill-srv1, ecare-web-srv1 | SMS Billing Application/Database Server; e-care Internet Billing Web/Application | VM-DL580-1 | DL580 | Windows 2003 | 2 | Xeon X7460
6 | sms-bill-srv2, ecare-web-srv2 | SMS Billing Application/Database Server; e-care Internet Billing Web/Application | VM-DL580-2 | DL580 | Windows 2003 | 2 | Xeon X7460

No. | Hostname | Role | Host | Model | OS | CPUs | CPU Type
1 | chrgcrd-db-srv1 | Charge Card Database Server | chrgcrd-db-DL380-1 | DL380 | RedHat Enterprise Linux 5 | 2 | Xeon X5570
2 | chrgcrd-db-srv2 | Charge Card Database Server | chrgcrd-db-DL380-2 | DL380 | RedHat Enterprise Linux 5 | 2 | Xeon X5570
3 | otp-db-srv1 | OTP Database Server | otp-db-T5440-1 | T5440 | Solaris 10 | 2 | UltraSPARC T2 Plus
4 | otp-db-srv2 | OTP Database Server | otp-db-T5440-2 | T5440 | Solaris 10 | 2 | UltraSPARC T2 Plus


No. | Hostnames | Role | Host/LDOM | Model | OS | CPUs | CPU Type
1 | edch-ftp-srv1, int-ftp-srv1 | EDCH FTP Server; Internal FTP Server | LDOM-T5440-1 | T5440 | Solaris 10 | | UltraSPARC T2 Plus
2 | edch-ftp-srv2, int-ftp-srv2 | EDCH FTP Server; Internal FTP Server | LDOM-T5440-2 | T5440 | Solaris 10 | | UltraSPARC T2 Plus
3 | snsg-log-srv1 | SenSage Data Center Events Data Warehouse SLS Server | snsg-log-rx2660-1 | rx2660 | RedHat Enterprise Linux 5 | 2 | Intel Itanium 9140M
4 | snsg-log-srv2 | SenSage Data Center Events Data Warehouse SLS Server | snsg-log-rx2660-2 | rx2660 | RedHat Enterprise Linux 5 | 2 | Intel Itanium 9140M
5 | snsg-log-srv3 | SenSage Data Center Events Data Warehouse SLS Server | snsg-log-rx2660-3 | rx2660 | RedHat Enterprise Linux 5 | 2 | Intel Itanium 9140M
6 | snsg-log-srv4 | SenSage Data Center Events Data Warehouse SLS Server | snsg-log-rx2660-4 | rx2660 | RedHat Enterprise Linux 5 | 2 | Intel Itanium 9140M
7 | snsg-log-srv5 | SenSage Data Center Events Data Warehouse SLS Server | snsg-log-rx2660-5 | rx2660 | RedHat Enterprise Linux 5 | 2 | Intel Itanium 9140M
8 | archv-sls-srv1 | SenSage CDR Archiving SLS Server | archv-sls-rx2660-1 | rx2660 | RedHat Enterprise Linux 5 | 2 | Intel Itanium 9140M
9 | archv-sls-srv2 | SenSage CDR Archiving SLS Server | archv-sls-rx2660-2 | rx2660 | RedHat Enterprise Linux 5 | 2 | Intel Itanium 9140M
10 | archv-sls-srv3 | SenSage CDR Archiving SLS Server | archv-sls-rx2660-3 | rx2660 | RedHat Enterprise Linux 5 | 2 | Intel Itanium 9140M
11 | archv-sls-srv4 | SenSage CDR Archiving SLS Server | archv-sls-rx2660-4 | rx2660 | RedHat Enterprise Linux 5 | 2 | Intel Itanium 9140M
12 | archv-sls-srv5 | SenSage CDR Archiving SLS Server | archv-sls-rx2660-5 | rx2660 | RedHat Enterprise Linux 5 | 2 | Intel Itanium 9140M
13 | archv-sls-srv6 | SenSage CDR Archiving SLS Server | archv-sls-rx2660-6 | rx2660 | RedHat Enterprise Linux 5 | 2 | Intel Itanium 9140M
14 | archv-sls-srv7 | SenSage CDR Archiving SLS Server | archv-sls-rx2660-7 | rx2660 | RedHat Enterprise Linux 5 | 2 | Intel Itanium 9140M
15 | archv-sls-srv8 | SenSage CDR Archiving SLS Server | archv-sls-rx2660-8 | rx2660 | RedHat Enterprise Linux 5 | 2 | Intel Itanium 9140M
16 | archv-sls-srv9 | SenSage CDR Archiving SLS Server | archv-sls-rx2660-9 | rx2660 | RedHat Enterprise Linux 5 | 2 | Intel Itanium 9140M


17 | archv-sls-srv10 | SenSage CDR Archiving SLS Server | archv-sls-rx2660-10 | rx2660 | RedHat Enterprise Linux 5 | 2 | Intel Itanium 9140M
18 | rating-srv1 | Rating Server | rating-app-T5440-1 | T5440 | Solaris 10 | 2 | UltraSPARC T2 Plus
19 | rating-srv2 | Rating Server | rating-app-T5440-2 | T5440 | Solaris 10 | 2 | UltraSPARC T2 Plus

No. | Hostname | Role | Host | Model | OS | CPUs | CPU Type
1 | bcc-db-srv1 | BehPardaz Post-paid Customer Care Database Server | M9000-1 | M9000 | Solaris 10 | 4 | SPARC64 VII
2 | bcc-db-srv2 | BehPardaz Post-paid Customer Care Database Server | M9000-2 | M9000 | Solaris 10 | 4 | SPARC64 VII
3 | incc-db-srv1 | BehPardaz Pre-paid Customer Care Database | M9000-1 | M9000 | Solaris 10 | 4 | SPARC64 VII
4 | incc-db-srv2 | BehPardaz Pre-paid Customer Care Database | M9000-2 | M9000 | Solaris 10 | 4 | SPARC64 VII
5 | cmptel-medsrv1 | Comptel Mediation Server | med-T5440-1 | T5440 | Solaris 10 | 4 | UltraSPARC T2 Plus
6 | cmptel-medsrv2 | Comptel Mediation Server | med-T5440-2 | T5440 | Solaris 10 | 4 | UltraSPARC T2 Plus
7 | intbill-db-srv1 | Intec Interconnect Billing Database Server | M9000-1 | M9000 | Solaris 10 | 4 | SPARC64 VII
8 | intbill-db-srv2 | Intec Interconnect Billing Database Server | M9000-2 | M9000 | Solaris 10 | 4 | SPARC64 VII
9 | vas-db-srv1 | VAS Box Database Server | vas-db-T5440-1 | T5440 | Solaris 10 | 4 | UltraSPARC T2 Plus
10 | vas-db-srv2 | VAS Box Database Server | vas-db-T5440-2 | T5440 | Solaris 10 | 4 | UltraSPARC T2 Plus
11 | portal-db-srv1 | Portal Database Server | LDOM-T5440-1 | T5440 | Solaris 10 | 4 | UltraSPARC T2 Plus
12 | portal-db-srv2 | Portal Database Server | LDOM-T5440-2 | T5440 | Solaris 10 | 4 | UltraSPARC T2 Plus
13 | reg-db-srv1 | Registration Database Server | LDOM-T5440-1 | T5440 | Solaris 10 | 4 | UltraSPARC T2 Plus


14 | reg-db-srv2 | Registration Database Server | LDOM-T5440-2 | T5440 | Solaris 10 | | UltraSPARC T2 Plus
15 | vchbnk-db-srv1 | BehPardaz Voucher Bank Database Server | vchbnk-db-T5440-1 | T5440 | Solaris 10 | 4 | SPARC64 VII
16 | vchbnk-db-srv2 | BehPardaz Voucher Bank Database Server | vchbnk-db-T5440-2 | T5440 | Solaris 10 | 4 | SPARC64 VII
17 | sa-db-srv1 | BehPardaz Service Provisioning Database Server | sa-db-T5440-1 | T5440 | Solaris 10 | 4 | SPARC64 VII
18 | sa-db-srv2 | BehPardaz Service Provisioning Database Server | sa-db-T5440-2 | T5440 | Solaris 10 | 4 | SPARC64 VII

No. | Hostname | Role | Host | Model | OS | CPUs | CPU Type
1 | bill-db-srv1 | BehPardaz Billing Database Server | M9000-1 | M9000 | Solaris 10 | 8 | SPARC64 VII
2 | bill-db-srv2 | BehPardaz Billing Database Server | M9000-2 | M9000 | Solaris 10 | 8 | SPARC64 VII
3 | prvs-db-srv1 | Comptel Service Provisioning Database Server-1 | M9000-1 | M9000 | Solaris 10 | 8 | SPARC64 VII
4 | prvs-db-srv2 | Comptel Service Provisioning Database Server-2 | M9000-2 | M9000 | Solaris 10 | 8 | SPARC64 VII

Note: The servers that are listed in one single row will be installed on VMs; therefore we consider one license per ESX server.

The IBM TS3500 Tape Library is chosen as the tape library solution for the MCI Tohid Data Center. The IBM System Storage TS3500 Tape Library is designed to provide a highly scalable, automated tape library for open systems backup and archive in midrange to enterprise environments.

The TS3500 tape library is also ordered with the dual accessor model option, to help increase mount performance and overall system reliability. The TS3500 tape library is designed to provide the flexibility required to help address the system capacity and performance requirements of the most demanding applications, accommodating up to 192 drives in up to sixteen TS3500 tape library frames. The TS3500 tape library is designed with a variety of advanced features.

The TS3500 Tape Library offers the following enhancements:
- Enhanced data accessibility through dual accessors that increase speed and provide failover protection
- Enhanced data security through support for tape drive encryption and write-once-read-many (WORM) cartridges
- Increased storage capacity with high-density frames that greatly increase capacity without requiring more floor space

Some additional features of the TS3500 Tape Library are listed below:
- Ability to attach multiple simultaneous heterogeneous servers
- Remote management using a web browser or the TS3500 Command Line Interface
- Remote monitoring using Simple Network Management Protocol (SNMP)
- Multipath architecture
- Drive/media exception reporting
- In-depth reporting using the Tape System Reporter
- Host-based path failover

An individual library consists of one base frame and up to 15 expansion frames, and can include up to 192 tape drives and more than 20,000 tape cartridges. When the second accessor of the tape library is installed, the TS3500 Tape Library features enhanced availability by utilizing the additional accessor. The additional accessor enables the library to operate without disruption if any component of the working accessor fails. As another advantage, cartridge mount performance is also optimized. (A mount occurs when the accessor removes a cartridge from a drive, returns it to its storage slot, collects another cartridge from a random storage slot, moves it, and loads it into the drive.)
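The dual-accessor behaviour can be pictured with a small sketch. This is a purely conceptual illustration, not the TS3500 firmware logic: it captures only the two properties described above, namely "use the accessor that can serve the mount soonest" and "fail over automatically when one accessor is down".

```python
# Conceptual sketch of dual-accessor mount dispatch (NOT IBM's actual
# firmware logic): pick the closest healthy accessor, fail over if needed.
from dataclasses import dataclass

@dataclass
class Accessor:
    name: str
    position: int          # simplified 1-D position along the rail
    healthy: bool = True

def dispatch_mount(accessors, slot_position):
    """Pick the working accessor with the shortest travel distance."""
    candidates = [a for a in accessors if a.healthy]
    if not candidates:
        raise RuntimeError("no working accessor: library outage")
    return min(candidates, key=lambda a: abs(a.position - slot_position))

if __name__ == "__main__":
    a = Accessor("accessor-A", position=0)
    b = Accessor("accessor-B", position=100)
    print(dispatch_mount([a, b], 80).name)   # accessor-B (closer)
    b.healthy = False
    print(dispatch_mount([a, b], 80).name)   # accessor-A takes over
```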


When dual accessors are installed and the master server issues a command for cartridge movement, the library automatically determines which accessor can perform the mount in the most timely manner. If the library's primary accessor fails, the second accessor assumes control, eliminating a system outage or the need for operator intervention. The tape library for the MCI Tohid Data Center includes four frame models, as listed in Table 9 (TS3500 Tape Library Frame Types) below:

Model | Function | Drive/Media Type | Capacity | Notes
HA1 | Service Bay A | N/A | N/A | Required for the second accessor. Contains slots for diagnostic cartridges only.
L53 | Base Frame | LTO Ultrium | Up to 12 drives and up to 287 cartridges | Equipped with the enhanced frame control assembly.
D53 | Expansion Frame | LTO Ultrium | Up to 12 drives and up to 440 cartridges | Equipped with the enhanced frame control assembly.
D53 | Service Bay B | N/A | N/A | Configured as service bay B. Contains gripper test slots for diagnostic cartridges, and also contains unusable storage slots.

After the installation, as you view the library from the front, service bay A (the HA1 frame) is on the far left and service bay B is on the far right. The following figure, TS3500 Tape Library Frame Position, shows the location of the service bays in the TS3500 Tape Library.


TS3500 Tape Library Frame Position


The following describes the major parts of the TS3500 Tape Library.
1. Library frames: The base frame is named L53 and the expansion frame is D53. Each frame contains a rail system, cartridge storage slots, and 12 tape drives. There are another two frames, Service Bay A and Service Bay B, which are needed to deploy the dual accessor configuration.
2. Rail system: The assembly on which the cartridge accessor moves through the library. The system includes the top and bottom rails.
3. Cartridge accessor: The assembly that moves tape cartridges between storage slots, tape drives, and the I/O stations.
4. Accessor controller: A circuit board that facilitates all accessor motion requests (such as calibrations, moves, and inventory updates). The second accessor also has a second accessor controller.
5. Cartridge storage slots: Cells that are mounted in the TS3500 Tape Library and used to store tape cartridges.
6. IBM LTO Ultrium tape drives: Mounted in the TS3500 Tape Library, 12 units per frame that read and write data that is stored on tape cartridges.
7. Front door: The front door of any frame.
8. Door safety switch: A device in each frame that shuts down the motion power to the cartridge accessor whenever the front door is opened.


9. I/O stations: A cartridge compartment on the front door of the base frame of the TS3500 Tape Library that allows us to insert or remove tape cartridges without the library performing a re-inventory of the frame.
10. Operator panel: Located on the front of the base frame, the operator panel is the set of indicators and controls that lets us perform operations and determine the status of the library.
11. Enhanced frame control assembly: An assembly of components that facilitate RS-422 communication between the drives in a frame and the accessor controller and operator panel controller. It includes two power supplies, both of which can provide power to the library and all drives in a frame.
12. Patch panel: A panel that houses the cable connections from the Fibre Channel interfaces on each LTO drive.

Figure 3: Components of the IBM System Storage TS3500


The IBM tape library for MCI Tohid Data Center is configured to have 24 LTO Ultrium-4 tape drives. The LTO Ultrium-4 tape drives are high-performance, high-capacity data-storage units that are installed in the TS3500 Tape Library. There are 12 drives installed in each of the base and expansion frames of the library. The following Table 10 lists the specifications of the Ultrium-4 tape drives:
Type of Drive | Speed of Connectivity | Native Data Rate | Native Capacity | Other Information
IBM System Storage LTO Ultrium-4 Tape Drive | 4 Gbps Fibre | 120 MB/s | 800 GB (745.06 GB) | Also known as the TS1040 tape drive

Table 10: TS1040 Tape Drive Specification

The most highlighted features of the Ultrium-4 tape drives are:
- Speed matching: dynamically adjusts the drive's native (uncompressed) data rate to the slower data rate of a server.
- Channel calibration: customizes each read/write data channel for optimum performance. The customization enables compensation for variations in the recording channel transfer function, media characteristics, and read/write head characteristics.
- Power management: reduces the drive's power consumption during idle periods.

ALMS is an extension of IBM's patented Multi-Path Architecture. With ALMS, the TS3500 Tape Library can virtualize the locations of cartridges (called SCSI element addresses) while maintaining native SAN attachment for the tape drives. ALMS enables logical libraries to consist of unique drives and ranges of volume serial (VOLSER) numbers, instead of fixed locations. ALMS offers dynamic management of cartridges, cartridge storage slots, tape drives, and logical libraries. For MCI Data Center, as we are using NetBackup to control the operation and


functionality of the tape library, we use ALMS only for some special configuration and hardware troubleshooting.

NetBackup Bare Metal Restore (BMR) is one of the most valuable options of NetBackup. BMR is the server recovery option of NetBackup that can be used in disaster recovery situations or any other time that we need to restore a whole system with its operating system, hardware drivers, installed applications and so on. BMR automates and streamlines the server recovery process, making it unnecessary to reinstall operating systems or configure hardware manually. Using the NetBackup BMR option, administrators can restore servers in a fraction of the time without extensive training or tedious administration. BMR restores the operating system, the system configuration, and all the system files and data files with the following steps:
- Run one command from the NetBackup master server.
- Reboot the client.
Separate system backups or reinstallations are not required.

The components of a BMR protection domain are as follows:
- BMR master server: The NetBackup BMR master server manages backups and restores of the protected client systems and the operation of BMR.
- NetBackup media servers: NetBackup media servers control the storage devices on which the client files are stored.
- BMR boot servers: Boot servers provide the environment that is required to rebuild a protected client, including resources such as shared resource trees (SRTs). Shared resource trees contain the software that is used to rebuild the protected system so that NetBackup can restore the original files. The software includes the operating system software and the NetBackup client software.


- Clients: Clients are the systems backed up by NetBackup and protected by BMR. A client may also be a server for other applications or data, a NetBackup media server, or a BMR boot server.

The figure below depicts the general topology of a BMR backup:

Figure 4: Sample BMR Network

For each type of client that we want to protect, the relevant boot server needs to be installed. For example, a Solaris client requires a Solaris boot server, a Windows client requires a Windows boot server, and so on. For UNIX, Linux, and legacy Windows restores, a boot server at a particular operating system version can only host SRTs of the same operating system version or lower. For example, a Solaris 9 boot server can host Solaris 8 and Solaris 9 SRTs, but not Solaris 10 SRTs (a minimal sketch of this rule follows Table 11 below). The servers inside MCI Tohid Data Center run four different types of OS: RedHat Enterprise Linux 5, Windows 2003, Windows 2008 and Solaris 10. Based on the above facts we need three boot servers, as follows:


Row | Server Name | Server Type | Operating System | Application
1 | linux-bmr-srv | DL380 | RedHat Enterprise Linux 5 | Veritas NetBackup Linux BMR Boot Server
2 | win-bmr-srv | DL380 | Windows 2008 | Veritas NetBackup Windows BMR Boot Server
3 | solaris-bmr-srv | T5140 | Solaris 10 | Veritas NetBackup Solaris BMR Boot Server
Table 11: List of Required BMR Boot Servers
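Returning to the version-compatibility rule mentioned above, the following is a minimal illustrative sketch of that constraint. The helper name and version tuples are ours for illustration only and are not part of any NetBackup API:

```python
# Sketch of the SRT hosting rule described above: a boot server at a given
# OS version can host SRTs of the same version or lower (same OS family
# assumed). Names below are ours, not NetBackup's.

def can_host_srt(boot_server_version: tuple, srt_version: tuple) -> bool:
    """Return True if a boot server at boot_server_version can host an
    SRT built for srt_version."""
    return srt_version <= boot_server_version

# A Solaris 9 boot server can host Solaris 8 and 9 SRTs, but not Solaris 10.
assert can_host_srt((9,), (8,))
assert can_host_srt((9,), (9,))
assert not can_host_srt((9,), (10,))
```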

Each network segment of clients must have a BMR boot server that can support the clients, so the network interfaces of these boot servers should be configured in a way that the boot servers have a physical IP presence on multiple networks. Figure 5 below shows the networking topology for the BMR boot server connections:

Figure 5: BMR Networking Topology

The following tables show the specifications of each of the Boot Servers:
Server Name: Win-bmr-srv
Server Name: Linux-bmr-srv
Server Name: Solaris-bmr-srv
Table 12: BMR Boot Servers Specification


A client is protected after a NetBackup backup policy that is configured for BMR protection backs it up. Backups must occur before a client fails and requires a Bare Metal Restore. Each protected client must be backed up regularly by at least one policy that performs a full backup. The policy can also perform cumulative incremental or differential incremental backups, but a full backup must occur. The backup saves the files of the computer on a storage device that NetBackup manages, and saves the configuration of the client on the BMR master server. After a client is backed up by a policy that is configured for BMR protection, the client is registered with BMR as a protected client. It then appears in the Bare Metal Restore Clients view in the NetBackup Administration Console. We can use one policy or multiple policies to protect a single client. The following are the requirements for protecting BMR clients:
- A policy must be of type MS-Windows (for Windows clients) or Standard (for UNIX and Linux clients).
- A policy must have the "Collect disaster recovery information for Bare Metal Restore" attribute set.

To integrate all backup policies and schedules, we define the policies that back up all the servers to be protected by BMR in Section 4, along with the other types of backups. Please refer to Section 4 to check the defined policies and schedules. As mentioned before, a shared resource tree (SRT) is a collection of the following:
- Operating system files
- NetBackup client software
- Other programs to format drives, create partitions, rebuild file systems, and restore the original files using the NetBackup client software


A shared resource tree must be created on a local file system of the boot server. BMR sets permissions for the SRT directory to allow read access to all, and read and write access to the root or Administrator user. To create an SRT, the installation software or images for the following items are needed:
- Operating system (UNIX and Linux only).
- For Linux SRTs, the Bare Metal Restore third-party products CD. This CD contains the open source products that may not be included in the vendor Linux distribution.
- Optional: other applications or packages.
- Optional: patches, maintenance levels, Maintenance Packs, service packs, or drivers that the operating system requires, or other software that is to be installed in the SRT.

You must install into the SRT any operating system patches that the NetBackup client software requires. If they are not installed, NetBackup does not function correctly in the temporary restore environment, and the restore may fail.

NetBackup provides different options and agents to back up different objects; even for the same object there are different methods that can be selected as the backup approach. In this section we'll explain the different methods that will be used in MCI Tohid Data Center as our approaches to back up the different types of data. In the next section, where the backup requirements are discussed, we just name the backup method that will be used to fulfill each specific requirement; the detailed explanation of each method can be found in this section.

LAN backup is the most common method used. This is the simplest and cheapest method. A backup LAN involves setting up an additional LAN and setting another set of IP addresses on the relevant clients; the details of IP addressing and network segmentation are not in the scope of the current document and can be found in the Network Planning Document. The disadvantage of this method is the low speed of the LAN if the amount of data to be backed up is huge. It may also affect the performance of the LAN network negatively. In this method the backup traffic will pass through the LAN network between the NetBackup client and the LAN media server (media-nbkp-srv1).

This method does not need any special license to be activated. By installing the NetBackup client and entering the NetBackup Standard Client license, we can back up the data over LAN. Figure 6 below depicts the backup traffic flow between the different components.

Figure 6: Backup Traffic Flow- LAN Backup

As shown in the above figure, the communication between the NetBackup master server and the NetBackup clients goes through the LAN; the backup traffic also passes over the LAN from the different clients to the NetBackup media server (media-nbkp-srv1), which then writes it to the target storage units. In the restore process the direction of the traffic flow is reversed: the data image is read by media-nbkp-srv1 from tape or the CX4-240 and then transferred to the target server via LAN. The following figure depicts the restore traffic flow:


Figure 7: Restore Traffic Flow through LAN

SAN Client is an optional NetBackup feature that provides high speed backups and restores of NetBackup clients. In this method the backup and restore traffic occurs over the SAN, while the NetBackup server and client administration traffic occurs over the LAN. This method is useful when we are going to back up a huge amount of data in a short time. By using the SAN as the transport network for backup traffic we can lower the time required for backup and also eliminate the negative effect of backup traffic on the limited LAN network. Figure 8 below shows the backup traffic flow between the different components of the NetBackup system.


Figure 8: Backup Traffic Flow- SAN Client

Fibre Transport connections between NetBackup clients and the NetBackup FT media server (media-nbkp-srv2) are referred to as FT pipes. A SAN client can be in a cluster and can host clustered applications. The FT client service and the Symantec PBX service must run on all failover nodes. The backup policy references to client computers can be aliases or dynamic application cluster names. In this scenario we'll use either disk or tape as the storage destination for SAN Client backups. With tape as a destination we can use multistreaming, which divides the automatic backups for a client into multiple jobs. Because the jobs are in separate data streams, they can occur concurrently. The data streams can be sent over one or more FT pipes to the FT media server. The media server multiplexes them together onto one or more tape media volumes. In the restore process, the backup image is read by media-nbkp-srv2 from tape or the CX4-240. The image is then transferred to the target host through an FT pipe (SAN) and finally written to the central storage system by the server that owns the data.

The following picture shows the traffic flow for the restore process:

Figure 9: Restore Traffic Flow- SAN Clients

The configuration and zoning of the SAN are not in the scope of this document and are explained in the SAN Design document; nevertheless, the following are important considerations about the HBA configuration on the NetBackup media server and the preferred SAN zoning configuration:
- Two 4 Gb FC ports on the NetBackup media server (media-nbkp-srv2) will be used for the connections to the SAN clients. This HBA must be configured to use the NetBackup target mode driver.
- The HBA ports on the SAN clients must operate in the default initiator mode.
- Another two 4 Gb FC ports will be connected to the storages. The HBA ports that connect to the storage must remain in the default initiator mode.

About the zoning of the SAN, Symantec recommends the following zones:


- An FT traffic zone that includes only the SAN clients and the NetBackup FT media server HBA ports that connect to the SAN clients.
- A backup storage zone that includes the storage and the FT media server HBA ports that connect to the storage.

These zones prevent SAN Client traffic from using the bandwidth that may be required for other SAN activity.

SAN media servers are NetBackup media servers that back up their own data. SAN media servers cannot back up data that resides on other clients, but they are useful in certain situations. SAN media servers use the SAN as their transport network for backup traffic, the same as SAN clients, but they have another main advantage: they can share tape resources with the NetBackup master and media servers. The Shared Storage Option (SSO) of NetBackup needs to be licensed to enable sharing the tape drives between the different SAN media servers and the NetBackup media servers (media-nbkp-srv1, 2). Figure 10 below shows the backup traffic flow between the different components of the NetBackup system. As we can see, the backup traffic is sent directly from the SAN media servers to the IBM tape library or the disk storages.

Figure 10: Backup Traffic Flow-SAN Media Server


This solution is mainly used in case there is a large database on the server (NetBackup client) that needs to be backed up frequently. So we'll need the licensing for the relevant application and database agent as well as the licensing for SAN Media Server. To define a backup policy for a SAN media server, the SAN media server is added as the only client.

A snapshot is a point-in-time, read-only, disk-based copy of a client volume. NetBackup can back up the original volume or the snapshot that is created from the original volume. The contents of the snapshot volume are cataloged as if the backup was produced directly from the primary volume. After the backup is complete, the snapshot-based backup image on storage media is indistinguishable from a backup image produced by a traditional, non-snapshot backup. Users and client operations can access the primary data without interruption while data on the snapshot volume is being backed up.

NetBackup includes a set of software libraries that are called "snapshot providers." The providers enable Snapshot Client to access the snapshot technology in the storage subsystem. The EMC CLARiiON disk array enables NetBackup to create hardware snapshots in the CLARiiON array. There are two main methods provided by EMC CLARiiON to create snapshots, as listed in the following table:
Snapshot Method | Description and Notes
EMC_CLARiiON_SnapView_Clone | For full-volume mirror snapshots with EMC CLARiiON disk arrays.
EMC_CLARiiON_SnapView_Snapshot | For space-optimized, copy-on-write snapshots with EMC CLARiiON disk arrays.
Table 13: EMC CLARiiON Snapshot Methods

If the snapshot method is specified in the definition of the backup policy, NetBackup will use a snapshot of the data to be backed up. When the policy runs, the snapshot method calls the EMC CLARiiON provider library. The provider then uses the Unisphere Secure CLI commands on the CX4-960 storage subsystem to create the snapshot.
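As an operational aside, an administrator could verify the SnapView snapshots that the provider has created with the same Unisphere Secure CLI. The sketch below is illustrative only: the storage processor address is a placeholder, and the exact subcommand spelling should be confirmed against the installed naviseccli version.

```python
"""Illustrative sketch: listing SnapView snapshots on the CX4-960 via
Unisphere Secure CLI (naviseccli). The SP address is a placeholder and
the subcommand spelling should be verified locally."""
import subprocess

SP_ADDRESS = "10.0.0.1"  # placeholder storage processor IP

def list_snapview_snapshots() -> str:
    # 'snapview -listsnapshots' as we understand it from the CLARiiON
    # CLI documentation; confirm against the installed CLI version.
    result = subprocess.run(
        ["naviseccli", "-h", SP_ADDRESS, "snapview", "-listsnapshots"],
        capture_output=True, text=True, check=True)
    return result.stdout

if __name__ == "__main__":
    print(list_snapview_snapshots())
```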


The following table includes software requirements to make Snapshots using EMC CLARiiON arrays.

Table 14: EMC Snapshot Software Requirement

The figure below shows the software components on the NetBackup clients and the CLARiiON array, and indicates the control function of each.

Figure 11: Snapshot Process

At the moment there are some uncertainties about using snapshots for backup creation:
1- The amount of free space on the CX4-960 is not finalized yet, so we can't decide about the space allocated for snapshot purposes.


2- The CX4-960 will use FLARE OS 30 according to the latest design, and Navisphere will be replaced by Unisphere. As there is currently not much material about the compatibility of NetBackup 7.1 and Unisphere, it is not 100% sure that snapshots can be created with the current version.

This feature makes backups available for quick recovery from disk. Instant Recovery combines snapshot technology with the ability to do rapid snapshot-based restores. The snapshot is retained on disk as a full backup image. The snapshot can also be the source for an additional backup copy to tape or other disk-based storage. Because snapshots require disk space, they cannot be retained forever. To balance the space consumed against the convenience of having multiple snapshots available for instant recovery, we have to limit the number of snapshots to retain.

NetBackup integrates the database backup and recovery capabilities of the Oracle Recovery Manager (RMAN) with the backup and recovery management capabilities of NetBackup. In this case NetBackup for Oracle supplies the Veritas I/O library (libobk) and the Oracle Database software supplies RMAN and the OCI. Using this combination, NetBackup can automate and centralize the backup operation of Oracle databases. A backup policy for an Oracle database defines the backup criteria for the backup job. These criteria include the following:
- Storage unit and media to use
- Policy attributes
- Backup schedules
- Clients to be backed up
- Backup templates or script files to be run on the clients

The first four criteria are common to different types of backup, and the last one is particular to Oracle database backups. With a few exceptions, NetBackup manages a database backup like a file system backup. One major difference is that the backup levels that are defined


will be based on the Oracle backup levels. Table 15 below lists the specific levels used for Oracle backups.

Level | Description
Full backup | A full backup copies all blocks into the backup set, skipping only data file blocks that have never been used. Note that a full backup is not the same as a whole database backup; "full" is an indicator that the backup is not incremental. A full backup has no effect on subsequent incremental backups, which is why it is not considered part of the incremental strategy.
Incremental backup | An incremental backup is a backup of only those blocks that have changed since a previous backup. RMAN lets you create incremental backups at level 0, level 1, and so on.
Multilevel incremental backup | A level 0 incremental backup, which is the base of subsequent incremental backups, copies all blocks containing data. When you generate a level n incremental backup in which n is greater than 0, you back up either all blocks that have been modified since the most recent backup at level n or lower (the default, called a differential incremental backup) or all blocks that have been modified since the most recent backup at level n-1 or lower (called a cumulative incremental backup).
Differential incremental backup | In a differential level n incremental backup, you back up all blocks that have changed since the most recent backup at level n or lower. For example, in a differential level 2 backup, you back up all blocks that are modified since the last level 2, level 1, or level 0 backup.
Cumulative incremental backup | In a cumulative level n incremental backup, you back up all blocks that have changed since the most recent backup at level n-1 or lower. For example, in a cumulative level 2 backup, you back up all blocks that are changed since the most recent level 1 or level 0 backup.
Table 15: Oracle Backup Levels

The backup selections list in a database policy has a different meaning than in non-database policies. For example, in a Standard or MS-Windows policy, the list contains files and directories to be backed up. In a database policy, we specify templates or scripts to be run. The script needs to be provided by the DB administrator for each particular system.
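To make the shape of such a script concrete, the following is an illustrative sketch of the kind of RMAN command file a DBA might supply. The 'SBT_TAPE' channel type routes the backup through the media management library supplied by NetBackup for Oracle; the file path and the chosen backup level are hypothetical, and in practice the RMAN invocation is usually wrapped in a shell script following the agent's sample templates:

```python
"""Hedged sketch: generating an RMAN command file of the kind referenced
in an Oracle policy's backup selections list. Path and level are
illustrative assumptions, not the project's actual script."""

RMAN_SCRIPT = """RUN {
  # Route the backup through the media manager (NetBackup).
  ALLOCATE CHANNEL ch00 TYPE 'SBT_TAPE';
  # Level 0 is the base of the incremental strategy (see Table 15).
  BACKUP INCREMENTAL LEVEL 0 DATABASE;
  RELEASE CHANNEL ch00;
}
"""

# Hypothetical script location the DBA would reference in the policy.
with open("/opt/scripts/oracle_level0.rman", "w") as f:
    f.write(RMAN_SCRIPT)
```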


NetBackup for Oracle can be used in conjunction with NetBackup Snapshot Client. In this way NetBackup for Oracle can back up Oracle objects by taking snapshot images of the component files; later, it backs up the snapshot version to the storage unit. A snapshot backup captures the data at a particular instant without causing significant client downtime. Client operations and user access continue without interruption during the backup. The resulting capture, or snapshot, can be backed up without affecting the performance or availability of the database. Using snapshot backups makes backups available for instant recovery from disk. Optionally, the image is retained on disk as well as backed up to storage. To use NetBackup for Oracle with Snapshot Client, we must have both NetBackup Snapshot Client and NetBackup for Oracle installed and licensed.

NetBackup for SQL Server extends the capabilities of NetBackup for Windows to include backing up and restoring SQL Server databases. NetBackup for SQL Server includes a client-based graphical user interface (GUI) program to perform various activities on SQL Server. These activities include the following:
- Configuration of options for NetBackup for SQL Server operations.
- Backups and restores of databases and database components, which include transaction logs, differentials, files, and file groups.
- Monitoring NetBackup for SQL Server operations.

To use the SQL Server agent to back up a database, we must set the policy type to MS-SQL-Server. The criteria related to this policy type include the following:
- Storage unit and media to use
- Policy attributes
- Backup schedules
- Clients to be backed up
- The batch files to run on the clients


Of the criteria listed above, all but the last are common to different policy types; the last one is specific to SQL Server backups. Like the backup for Oracle databases, MS-SQL backup has its own definition of the different backup levels. Table 16 below lists the different types of backup that can be performed on MS-SQL databases:
Term | Definition
Full | The database, including all of its component files, is backed up as a single image. The log file is included in a full database backup.
Differential | All of the changes since the last full backup are backed up to a single image.
Transaction Log | Transaction log backups are only available for the full and bulk-load recovery options. In this operation, the inactive portion of the transaction log is backed up.
Table 16: MS-SQL Backup Levels

To protect the data on virtual machines there are two basic procedures. Each of these methods offers performance advantages as well as limitations.
1. Local backup: This involves installing a standard NetBackup client inside each virtual machine and backing up the virtual machine the same way as a physical system. This backup methodology is popular because the implementation process is essentially the same as with physical machine backups.
2. Off-host backup: This method takes advantage of the vStorage API for Data Protection. Introduced with vSphere 4, this method off-loads backup processing from the ESX server to a separate backup server referred to as the VMware Backup Host.

The first architecture essentially gives us a backup solution that closely resembles that of physical servers, is simple to configure, and provides the same file-level restore capabilities as traditional backups. The advantages of local backup for virtual machines are listed below:


- Simple and familiar implementation. Backup administrators are more familiar with this method and can easily manage the virtual machine backups this way.
- Single file or folder backups and restores for all supported guest OSs. Single file restores directly into the guest OS are supported.
- Application and database backups (e.g. Microsoft SQL Server and Oracle) are supported. The configuration process is exactly the same as configuring the same type of backup on a physical system.

The disadvantages of the local backup method are:
- Full virtual machine images (vmdk files) are not backed up, making entire virtual machine restores (e.g. disaster recovery) more complex.
- The backup processing load on one virtual machine may negatively impact the ESX resources available to other virtual machines hosted on the same physical server.
- We'll need one standard client license for each guest operating system.

According to the suggestion from the project consultant, we'll use the second method to back up the VMs. In this case we need fewer standard client licenses, as we just use one license per ESX server. The following picture shows the backup traffic flow using the VMware Backup Host:

Figure 12: Backup Traffic Flow- VMware backup


For the restore process the same components are involved, but the direction of the data flow is reversed, as shown in the following picture:

Figure 13: VMware Restore Traffic Flow

NetBackup uses the MS-Windows policy type to back up systems running on Windows. It also has a special feature to back up systems with Active Directory installed. A NetBackup policy that backs up Active Directory can be configured to allow the restore of the objects and attributes in the Active Directory. The objects and attributes can be restored locally or remotely without the interruption of restarting the domain controllers where the restore is performed. Any Active Directory backup is always a NetBackup full backup, whether it is a granular backup or not. Whenever Active Directory is in a policy's Backup Selections list, the Active Directory


portion is always fully backed up, even when the backup type is incremental, differential or cumulative. Any other items in the Backup Selections list may use a differential or cumulative incremental backup type as indicated. To back up systems with Active Directory installed, we define the policy with one special option named "Enable granular recovery". When this policy attribute is enabled, users can restore the individual objects that reside within a database backup image, such as a user account from an Active Directory database backup. The backup image for Active Directory must be written to disk in order to perform granular-level restores. The following are the steps for configuring a policy to back up an Active Directory system:
1. The NetBackup Legacy Client Service (bpinetd) must be running under the domain administrator account on the Active Directory domain controller. (In this case, the Active Directory domain controller is the NetBackup client.)
2. The policy type should be set to MS-Windows.
3. The granular recovery option must be enabled. If this option is not enabled, the backup still runs, but it cannot produce granular restores.
4. The schedules can be configured as needed.
5. The directive set in the Backup Selections tab should be one of the following: System_State, Shadow Copy Components, or ALL_LOCAL_DRIVES.

NetBackup Granular Recovery leverages the Network File System (NFS) to read individual objects from a database backup image. Specifically, the NetBackup client uses NFS to extract data from the backup image on the NetBackup media server. Table 17 below lists the NFS components required for Windows 2003 R2 SP2.
NFS Component | NetBackup Client | NetBackup Media Server
Clients for NFS | X |
Microsoft Services for NFS Administration | X |
RPC External Data Representation | X | X
RPC Port Mapper | | X
Table 17: Required NFS Components

There are different types of data available inside MCI Tohid Data Center. The data can be classified based on its importance, the underlying storage subsystem on which it is stored, the type of data, and some other parameters. The backup system in MCI Tohid Data Center should be able to back up all existing data on the different servers. Different data have different backup requirements and need different approaches to back them up. In this section we'll explain the major types of data and the backup strategy for each of them. There are three major data types from the backup strategy point of view:
1. OS & application files
2. Application data files
3. Databases

The first type of data includes the system files of the OS, the configuration and registry files related to the OS, as well as the files belonging to the installed applications or agents on each system. According to the results of information gathering from the different system administrators, this type of data doesn't change frequently, as the OS and key applications on each server change rarely. Referring to the design document, this kind of data resides on the local disks of each system (DAS) for most of the servers inside MCI Tohid DC. There are also a few business critical servers which boot from SAN; in this case the OS and application files reside on the central storage system (EMC CLARiiON CX4-960). In the following we will check the backup solution for each of the mentioned cases.

To back up the OS and applications installed on each server, we plan to have a full backup of the disks attached to each server on a weekly basis. Considering the fact that this data doesn't change too often, there is no need to have incremental backups of these servers.


The policy type used in this case is MS-Windows for all Windows servers and Standard for all other types of OS, including Solaris and RedHat. The directive for these policies will be set to ALL_LOCAL_DRIVES, which lets us back up all the local drives of each server without needing to know the partition information of every single server. It also helps us keep a valid backup of the server without changing any policy or system information in case the partitioning of the server changes later. For example, when a new partition is added to the server, there is no need to manually change the policy that backs up the server, because a policy with ALL_LOCAL_DRIVES catches the newly added partition. The backup image will be stored on the storage units of the CX4-240 and will be retained for 4 weeks. As the weekly full backups are stored for 4 rounds (one month), we will have a usable backup image even if the backup process fails for any specific server 3 times in a month. The backup method in this case would be backup based on LAN, which is described in detail in the previous section. It means that the backup traffic will pass through the network interfaces that are dedicated to backup. The following tables show the corresponding policy and schedule attributes that will be defined to accomplish the mentioned backup job; the remaining tables show the client list related to each defined policy.
Policy Attributes
Policy Name | Policy Type | Used Directive | Collect disaster recovery information for Bare Metal Restore | Policy Storage
POL_FSBackup_WINFS_1 | MS-Windows | ALL_LOCAL_DRIVES | Enabled | DSUG_FSBackup_1
POL_FSBackup_SOLFS_1 | Standard | ALL_LOCAL_DRIVES | Enabled | DSUG_FSBackup_1
POL_FSBackup_LNXFS_1 | Standard | ALL_LOCAL_DRIVES | Enabled | DSUG_FSBackup_1
POL_FSBackup_LNXFS_2 | Standard | ALL_LOCAL_DRIVES | Enabled | DSUG_FSBackup_1
Table 18: Policy Attributes for OS & APP Backup
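Operationally, the directive and the BMR attribute shown in Table 18 could be applied from the master server with NetBackup's standard policy-administration commands. The sketch below assumes a UNIX master server with the default install path; the option spellings should be verified against the local NetBackup command reference before use:

```python
"""Hedged sketch: applying the Table 18 settings with NetBackup's
policy-administration commands. Assumes a UNIX master and the default
install path; verify -add and -collect_bmr_info against the installed
command reference."""
import subprocess

ADMINCMD = "/usr/openv/netbackup/bin/admincmd"

POLICIES = ["POL_FSBackup_WINFS_1", "POL_FSBackup_SOLFS_1",
            "POL_FSBackup_LNXFS_1", "POL_FSBackup_LNXFS_2"]

for policy in POLICIES:
    # Add the ALL_LOCAL_DRIVES directive to the backup selections list.
    subprocess.run([f"{ADMINCMD}/bpplinclude", policy, "-add",
                    "ALL_LOCAL_DRIVES"], check=True)
    # Enable "Collect disaster recovery information for Bare Metal Restore".
    subprocess.run([f"{ADMINCMD}/bpplinfo", policy, "-modify",
                    "-collect_bmr_info", "1"], check=True)
```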


Schedule Attributes
Schedule Name | Type of Backup | Frequency | Retention | Keep True Image Restoration | Backup Window | Schedule On
SDL_Weekly_Full_WINFS_1 | Full Backup | One Week | Four Weeks | 0 Days | 00:00 ~ 08:00 | Saturday
SDL_Weekly_Full_SOLFS_1 | Full Backup | One Week | Four Weeks | 0 Days | 00:00 ~ 08:00 | Sunday
SDL_Weekly_Full_LNXFS_1 | Full Backup | One Week | Four Weeks | 0 Days | 00:00 ~ 08:00 | Monday
SDL_Weekly_Full_LNXFS_2 | Full Backup | One Week | Four Weeks | 0 Days | 00:00 ~ 08:00 | Tuesday

Table 19: Schedule Attributes for OS & APP Backup

Policy [POL_FSBackup_WINFS_1] client list (36 clients):
- Windows 2003: bluct-trfmon-srv, svn10-bkp-srv, cfm-sanmgt-srv, csm-secmgt-srv, dcnm-netmgt-srv, emc-strgmgt-srv, hpic-srvmgt-srv, media-nbkp-srv1, mstr-nbkp-srv1, mstr-nbkp-srv2, vm-bkphost-srv, vc-vmmgt-srv1, vc-vmmgt-srv2, win-bmr-srv, otp-rep-srv, csa-hipsmgt-srv, ilm-auth-srv1, ilm-auth-srv2, kasper-av-srv, prtus-isms-srv, sec-scan-srv, tvl-ccmdb-srv, tvl-mon-srv, tvl-netview-srv, tvl-srm-srv, bigfix-patch-srv, BMSP-Vahdat-srv, BMSP-Emam-srv, BMSP-Yeganeh-srv, BMSP-Ray-srv, BMSP-Resalat-srv, BMSP-Ghods-srv
- Windows 2008: msad-dir-srv1, msad-dir-srv2, ms-ca-srv1, ms-ca-srv2

Policy [POL_FSBackup_LNXFS_1] client list (30 clients, all RedHat Enterprise Linux 5): archv-colct-srv1, archv-colct-srv2, archv-anlz-srv1, archv-anlz-srv2, archv-sls-srv1 through archv-sls-srv10, linux-bmr-srv, media-nbkp-srv2, nsm-secmgt-srv, chrgcrd-db-srv1, chrgcrd-db-srv2, chrgcrd-web-srv1, chrgcrd-web-srv2, int-mlgt-srv1, int-mlgt-srv2, int-mlst-srv1, int-mlst-srv2, emlrt-app-srv1, emlrt-app-srv2, rbt-app-srv1, rbt-app-srv2, rbt-app-srv3

Policy [POL_FSBackup_LNXFS_2] client list (25 clients, all RedHat Enterprise Linux 5): rbt-app-srv4, smsbx-web-srv1 through smsbx-web-srv4, sms-web-srv, prvs-app-srv1 through prvs-app-srv6, prvs-TestBed-srv, prvsrep-app-srv, prvsLeg-db-srv, trpw-chngmgt-srv, snsg-colct-srv1, snsg-anlz-srv1, snsg-log-srv1 through snsg-log-srv5, agrt-web-srv3, agrt-web-srv4

Policy [POL_FSBackup_SOLFS_1] client list (39 clients, all Solaris 10): nfc-trfmon-srv, smc-srvmgt-srv, solaris-bmr-srv, rating-srv1, rating-srv2, otp-app-srv1, otp-app-srv2, otp-db-srv1, otp-db-srv2, portal-db-srv1, portal-db-srv2, portal-web-srv1, portal-web-srv2, reg-db-srv1, reg-db-srv2, reg-web-srv1, reg-web-srv2, sa-db-srv1, sa-db-srv2, simbnk-db-srv1, simbnk-db-srv2, vas-app-srv1 through vas-app-srv4, vas-db-srv1, vas-db-srv2, vchbnk-db-srv1, vchbnk-db-srv2, mmsbx-app-srv1 through mmsbx-app-srv4, cmptel-tst-srv, intbill-tst-srv, fprot-av-srv, lms-netmgt-srv, tvl-addm-srv, tvl-omnibus-srv

Table 20: Client List for OS & APP Backup Policies

There are some points to be considered regarding the mentioned backup policies:
1. For the servers which include a significant amount of data files besides the OS and application files, the paths to those directories should be excluded from the client backup selection. This applies when application data such as CDRs are stored in specific directories on the local disks; that data will be backed up separately by another policy. Examples of this kind of server are: cmptel-med-srv, rating-srv, tap-in-srv, and tap-out-srv. (A sketch of the exclude-list configuration follows this list.)
2. The policies defined above will be used for BMR purposes as well. As specified in the policy attributes table, we enable the "Collect Disaster Recovery Information for Bare Metal Restore" option in these policies. Each policy then backs up the client with all the information needed for a BMR restore.
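As referenced in point 1 above, a minimal sketch of the exclusion mechanism on a UNIX/Linux client follows. It uses the standard NetBackup client exclude list (/usr/openv/netbackup/exclude_list); the example paths are placeholders for the CDR directories mentioned in the text, and Windows clients would use the equivalent client-side exclude list settings instead:

```python
"""Sketch, assuming a UNIX/Linux NetBackup client with the default
install path. The paths are illustrative placeholders, not the
project's actual CDR directories."""

EXCLUDES = [
    "/mediation/rawfiles",   # hypothetical CDR staging path
    "/storage",              # application data covered by its own policy
]

# Append the paths to the standard client exclude list so the
# file-system policies skip them.
with open("/usr/openv/netbackup/exclude_list", "a") as f:
    for path in EXCLUDES:
        f.write(path + "\n")
```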

Backup Traffic Rate on Network

We consider the size of the occupied space on local drives (excluding any application data) for each individual server to be around 20 GB. This is only the size of the OS and other key applications installed on the server. The backups are scheduled to happen on 4 different days, so the data traffic rate can be estimated as follows:
Average Occupied Space per Server | Number of Clients (Approximate) | Total Amount of Data for Backup/Day | Window Size | Average Data Rate per Hour | Average Data Rate per Second
20 GB | 40 | 800 GB | 8 Hours | 100 GB/Hour | 227 mbps
Table 21: Backup Volume Calculation - File System
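The Table 21 figures follow from straightforward unit conversion (binary gigabytes and megabytes assumed); a quick check:

```python
# Quick check of the Table 21 arithmetic (binary GB/MB assumed).
total_gb = 20 * 40            # 20 GB per server, ~40 clients per day
window_hours = 8
gb_per_hour = total_gb / window_hours    # 100 GB/hour
mbps = gb_per_hour * 1024 * 8 / 3600     # megabits per second
print(f"{gb_per_hour:.0f} GB/hour, {mbps:.1f} mbps")  # ~227 mbps, as in Table 21
```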

The following table shows the typical transfer rates of different network technologies:

Network Technology | Theoretical Gigabytes per Hour | Typical Gigabytes per Hour
100BaseT (switched) | 36 | 25
1000BaseT (switched) | 360 | 250
10000BaseT (switched) | 3600 | 2500
Table 22: Transfer Rates - Different Network Technologies

As we are using a dedicated Gigabit Ethernet network for backup traffic, we can typically have up to 250 gigabytes per hour of backup traffic. Our calculation according to Table 21 shows that the average traffic rate would be less than 100 GB per hour, so the network can handle the backup traffic. Every month one copy of the backup image for each of the servers mentioned above will be duplicated to tape; the media is selected from the Off-Site Volume Pool to be sent to the off-site location. This copy will be retained for 3 months, so in case of disaster recovery we can recover the server to a state of less than a month ago. (For more details about off-site storage please refer to Section 5, NetBackup Vault.)


After 3 months the image on the off-site tape expires and the tape should be returned to the site to be reloaded into the library for reuse. The following table summarizes the policies mentioned in this part:

Data | Data Location | Backup Level | Frequency | Retention | Media
OS & Applications | Local Disks | Full | 1 Week | 4 Weeks | Disks (CX4-240)
OS & Applications | Local Disks | Vault Copy | 1 Month | 3 Months | Off-Site Tape
Table 23: Summary Backup Policy - OS & APP

Storage Requirements Calculation

The following is the calculation of the number of off-site tapes:

Average Occupied Space: 20 GB
Number of Clients: 130
Total Amount of Data for Backup / 1 Round: 2600 GB
Size of Tape: 800 GB
Number of Required Off-Site Tapes per Month: 2600 / 800 ~ 4
Retention: 3 months (3 Rounds)
Total Number of Required Off-Site Tapes: 3 * 4 = 12
Total Number of Required Off-Site Tapes + One Round for Spare: 12 + 4 = 16
Table 24: Off-Site Tape Requirement - OS & APP Backup

And the amount of disk space needed on the CX4-240 to keep the backup images can be calculated as follows:

Average Occupied Space: 20 GB
Number of Clients: 130
Total Amount of Data for Backup/Week: 20 * 130 = 2600 GB
Retention: 4 Weeks (4 Rounds)
Maximum Number of Backup Images on Disk/Client: 4
Total Amount of Disk Space Required: 4 * 2600 = 10400 GB
Total Amount of Disk Space Required + One Round for Spare: (4 + 1) * 2600 ~ 12 TB
Table 25: Disk Storage Requirement - OS & APP Backup
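The sizing method in Tables 24 and 25 generalizes to the other policies in this section; a small sketch of the two formulas (function names are ours):

```python
import math

def offsite_tapes(avg_gb, clients, retention_rounds, tape_gb=800,
                  spare_rounds=1):
    """Tapes needed per the Table 24 method: data per round rounded up
    to whole tapes, times retention rounds plus one spare round."""
    tapes_per_round = math.ceil(avg_gb * clients / tape_gb)
    return tapes_per_round * (retention_rounds + spare_rounds)

def disk_space_gb(avg_gb, clients, retention_rounds, spare_rounds=1):
    """Disk space per the Table 25 method: one image per client per round."""
    return avg_gb * clients * (retention_rounds + spare_rounds)

print(offsite_tapes(20, 130, 3))    # 16 tapes, matching Table 24
print(disk_space_gb(20, 130, 4))    # 13000 GB, which the document rounds to ~12 TB
```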

The Storage Unit Group and related Storage Units will be defined as below:
Storage Unit Group | Storage Unit Name | Size
DSUG_FSBackup_1 | DSU_FSBackup_1 | 2 TB
DSUG_FSBackup_1 | DSU_FSBackup_2 | 2 TB
DSUG_FSBackup_1 | DSU_FSBackup_3 | 2 TB
DSUG_FSBackup_1 | DSU_FSBackup_4 | 2 TB
DSUG_FSBackup_1 | DSU_FSBackup_5 | 2 TB
DSUG_FSBackup_1 | DSU_FSBackup_6 | 2 TB
Table 26: Storage Group Definition - OS & APP Backup

Servers which host business critical data and need the highest availability are in this category. The OS installed on these high-end servers resides on SAN volumes and the servers boot from SAN. To back up the OS and applications of these servers, the procedure is almost the same as in the previous part. One difference is that these servers are installed as SAN media servers or SAN clients; in this case the backup traffic will pass through the SAN instead of the LAN. The choice between SAN Media Server and SAN Client depends on the size of the database on each of these servers: if the database is huge, we'll use SAN Media Server as our approach to back up the files on these servers; in other cases the SAN Client method is selected. Table 27 below lists the SAN media servers:

Row 1 2 3 4 5 6 7 8 9 10 11 12

Server Name bill-db-srv1 bill-db-srv2 intbill-db-srv1 intbill-db-srv2 bcc-db-srv1 bcc-db-srv2 cmptel-med-srv1 cmptel-med-srv2 incc-db-srv1 incc-db-srv2 prvs-db- srv1 prvs-db- srv2

Operating System Solaris 10 Solaris 10 Solaris 10 Solaris 10 Solaris 10 Solaris 10 Solaris 10 Solaris 10 Solaris 10 Solaris 10 Solaris 10 Solaris 10

Table 27: List of SAN Media Servers


The backup for these systems will be scheduled twice a week and the backup image will be stored on disk storage units defined on the CX4-240. The retention of the backup image on disk will be set to 4 weeks. Every month one copy of the backup image will be copied to tape to be sent off-site for the purpose of disaster recovery. This copy will be stored off-site for 3 months; after that the tape will be returned to the library to be reused. The backup policy mentioned in this section can be summarized in the following table:

Data | Data Location | Backup Level | Frequency | Retention | Media
OS & Applications | SAN | Full | Twice Weekly | 4 Weeks | Disks (CX4-240)
OS & Applications | SAN | Vault Copy | 1 Month | 3 Months | Off-Site Tape
Table 28: Backup Policy Summary - File System for Boot on SAN Servers

As all the clients which boot from SAN are Solaris based, the policy type will be set to Standard. As before, we'll use the ALL_LOCAL_DRIVES directive for the backup selection, and the option "Collect disaster recovery information for Bare Metal Restore" should be enabled to protect the servers using BMR. Table 29 and Table 30 below define the policy and schedule attributes respectively, and Table 31 defines the client list for the mentioned policy.

Policy Attributes
Policy Name | Policy Type | Used Directive | Collect disaster recovery information for Bare Metal Restore | Policy Storage
POL_FSBackup_BootOnSAN_1 | Standard | ALL_LOCAL_DRIVES | Enabled | DSUG_FSBackup_SANBoot_1
Table 29: Policy Attributes - OS & APP Backup - Boot on SAN Servers

Schedule Attributes
Schedule Name | Type of Backup | Frequency | Retention | Keep True Image Restoration | Backup Window | Schedule On
SDL_Weekly_Full_BootOnSAN_1 | Full Backup | Twice a Week | Four Weeks | 0 Days | 00:00 ~ 08:00 | Saturdays, Tuesdays
Table 30: Schedule Attributes - OS & APP Backup - Boot on SAN Servers

Policy [POL_FSBackup_BootOnSAN_1] Client List
Row | Server Name | Operating System
1 | bill-db-srv1 | Solaris 10
2 | bill-db-srv2 | Solaris 10
3 | bcc-db-srv1 | Solaris 10
4 | bcc-db-srv2 | Solaris 10
5 | incc-db-srv1 | Solaris 10
6 | incc-db-srv2 | Solaris 10
7 | cmptel-med-srv1 | Solaris 10
8 | cmptel-med-srv2 | Solaris 10
9 | intbill-db-srv1 | Solaris 10
10 | intbill-db-srv2 | Solaris 10
11 | prvs-db-srv1 | Solaris 10
12 | prvs-db-srv2 | Solaris 10
Table 31: Client List - FS Backup for Boot on SAN Servers

Storage Requirements Calculation

The amount of disk space needed on the CX4-240 to keep the backup images can be calculated as follows:

Average Occupied Space: 20 GB
Number of Clients: 12
Total Amount of Data for Backup/Round: 20 * 12 = 240 GB
Retention: 4 Weeks (8 Rounds)
Maximum Number of Backup Images on Disk/Client: 8
Total Amount of Disk Space Required: 8 * 240 = 1920 GB
Total Amount of Disk Space Required + One Round for Spare: (8 + 1) * 240 = 2.1 TB
Table 32: Disk Storage Requirement - OS & APP Boot on SAN

The following is the calculation of the number of off-site tapes:

Average Occupied Space: 20 GB
Number of Clients: 12
Total Amount of Data for Backup/Round: 240 GB
Size of Tape: 800 GB
Number of Required Off-Site Tapes per Month: 1
Retention: 3 months (3 Rounds)
Total Number of Required Off-Site Tapes: 3
Total Number of Required Off-Site Tapes + One Round for Spare: 4
Table 33: Off-Site Tape Requirement - OS & APP Boot on SAN

The Storage Unit Group and related Storage Units will be defined as below:

Storage Unit Group | Storage Unit Name | Size
DSUG_FSBackup_SANBoot_1 | DSU_FSBackup_SANBoot_1 | 2 TB
DSUG_FSBackup_SANBoot_1 | DSU_FSBackup_SANBoot_2 | 2 TB
Table 34: STUG Definition - OS & APP Backup Boot on SAN

Points to be considered in this part:
1- All the servers listed in this part host Oracle databases with a large amount of data, so it is necessary to exclude the directories and paths which include the database. The policy mentioned in this part is designed to back up the file system information only, not the database.
2- The volume of backup traffic on the SAN for the above policy is not considerable, as the amount of data to be backed up is not that large. So we'll schedule the policy to take a full backup of all the servers of this type in one night (Saturdays and Tuesdays).
3- It is assumed that the shared LUN between the two clustered servers only includes the database; if there is any other data on the shared LUN which needs to be backed up using a file system policy, an additional policy must be defined to back up the node that currently owns the resources.

Some application servers have important files to be backed up besides the files belonging to the OS and installed applications. In most cases these are CDR files to be processed and then imported to the database. Other examples of this data are the DUMP files of special databases used by specific applications, DNS records, or other types of data used by fundamental services inside MCI Tohid Data Center. The backup policies for the important data stored as files on the file systems of the servers are designed according to the data location (DAS, NAS and SAN) and other parameters, including the desired RTO and RPO, and also the retention period specified by the related system administrator. The following table summarizes the information gathered about the application servers.


Application | Type of Data | Data Files Paths | Total Size of Data Files | Data Change Frequency | Backup Window | RPO | Archive Needed | Encryption Requirement
Rating | CDR | D:\taphome\tapapp\archive_rated; D:\taphome\tapapp\archive_checked; D:\taphome\tapapp\archiv_tapout; D:\taphome\tapapp\archive_readytotap; D:\taphome\tapapp\EDCH; D:\taphome\tapapp\Daily_Sent_Taps | 200 GB | 2 Hours | 00:00 ~ 06:00 | 1 Day | Zipped files on the specified directories will be retained for a long time (infinite) | No
 | Data | /storage | 0.5 TB | 1 Day | 18:00 ~ 24:00 | 1 Day | No | No
 | CDR | /storage | 2 TB | 1 Day | 18:00 ~ 24:00 | 1 Day | No | No
Table 35: Application Data Files on Servers

As we can see from the above table, the biggest part of the data consists of CDR and EDR files; the rest of the data files are not large enough to affect the policy planning. So in order to estimate the space requirements and choose the policy which best fits the data backup and restore requirements, we have to focus on the servers that store a considerable volume of data.


SenSage will be deployed as the CDR archiving solution for MCI; the backup requirements for the SenSage servers are summarized in the following table:

Servers | Frequency | Backup Type | Path | Retention
archv-anlz-srv1, archv-anlz-srv2, log-anlz-srv1 | Every Friday | Full filesystem backup | /storage | 1 month On-Site Tape / 3 months Off-Site
archv-colct-srv1, archv-colct-srv2, log-colct-srv1 | Daily | Differential filesystem backup | /storage | 1 month Disk
archv-colct-srv1, archv-colct-srv2, log-colct-srv1 | Every Friday | Full filesystem backup | /storage | 1 month On-Site Tape / 3 months Off-Site
archv-sls-srv01..10, log-sls-srv01..05 | Every Friday | Differential filesystem backup | /storage | 1 month Disk
archv-sls-srv01..10, log-sls-srv01..05 | Monthly | Full filesystem backup | /storage | 3 months On-Site Tape / 4 months Off-Site
Table 36: SenSage Backup Requirement

The volume of data stored on each SenSage server can be determined according to Table 37. The most important point about this data is the number of files stored on each server: because there is a huge number of files (more than 1 million) on each of these servers to be backed up, we'll use the FlashBackup policy type for them. FlashBackup is a policy type that combines the speed of raw-partition backups with the ability to restore individual files.

Also, as the amount of data to be backed up on the SenSage servers is very large, we'll use the SAN style backup via SAN Client method (as described in Section 3.2). Therefore the backup traffic will pass through the SAN network and will not affect the LAN network.


Storage Requirements Calculation

Servers | Backup | Data per Round | Retention | Volume on Disk | Volume on On-Site Tape | Volume on Off-Site Tape
archv-anlz-srv1,2, log-anlz-srv1 | Weekly Full | 3 * 0.5 = 1.5 TB | 2 months On-Site Tape / 3 months Off-Site | - | 8 * 1.5 TB | 3 * 1.5 TB
archv-colct-srv1,2, log-colct-srv1 | Daily Differential | 10 GB | 2 months Disk | 60 * 10 GB | - | -
archv-colct-srv1,2, log-colct-srv1 | Weekly Full | 3 * 2 TB = 6 TB | 2 months Tape / 3 months Off-Site | - | 8 * 6 TB | 3 * 6 TB
archv-sls-srv01..10, log-sls-srv01..05 | Weekly Differential | 300 GB | 2 months Disk | 8 * 300 GB | - | -
archv-sls-srv01..10, log-sls-srv01..05 | Monthly Full | 10 * 3 TB + 5 * 2 TB = 40 TB | 2 months On-Site Tape / 3 months Off-Site | - | 2 * 40 TB | 3 * 40 TB
Table 37: SenSage Backup Volume Calculation

According to the above calculation, the total volume requirements on tape and disk would be as in the following tables:

Total On-Site Tape Volume: 8 * 1.5 + 8 * 6 + 2 * 40 = 140 TB
Total Off-Site Tape Volume: 3 * 1.5 + 3 * 6 + 3 * 40 = 142.5 TB
Table 38: Backup Volume - SenSage Backup

Size of Tape: 800 GB
Total Number of Required On-Site Tapes: 140 TB / 800 GB = 180
Total Number of Required Off-Site Tapes: 142.5 TB / 800 GB = 183
Table 39: Tape Requirement - SenSage Backup

Data Rate

As we can see in Table 37: SenSage Backup Volume Calculation, there are two peak times for the backup traffic of the SenSage servers: every Friday, when the full backups of the archv-anlz and archv-colct servers happen, and every month, when the full backup of the archv-sls servers takes place. At these times we'll have a significant amount of backup traffic. The following table shows the volume of backup traffic at the first of these times:

Total Amount of Data for Backup (Every Friday) | Window Size | Average Data Rate/Hour | Average Data Rate/Second
7.5 TB | 24 Hours | 320 GB/Hour | 730 mbps
Table 40: Backup Traffic Rate - SenSage Servers


As we are using SAN Style backup via SAN client to back up the data of SenSage servers, well have no bottleneck for transport network, but well need at least 2 tape drives running concurrently for this backup job, the following table shows the transfer rate for LTO-4 tape drive technology.
Drive LTO-4 Megabytes per second 120
Table 41: Tape drive data transfer rate

Gigabytes per hour, 259.2

The 60% utilization figure in the above table is a conservative estimate, meant to approximate the average, real-world performance of tape drives. Every month we have to take one full backup of every archv-sls server. This is a huge amount of data to be backed up, and the data rate calculation is as follows:

Total Amount of Data for Backup (Once a month) | 40 TB
Window Size                                    | 24 Hours
Average Data Rate/Hour                         | 1707 GB/Hour
Average Data Rate/Second                       | 3884 mbps
Number of Tape Drives Required                 | 1707 / 259.2 ≈ 6.6, rounded up to 7

Table 42: Backup Rate- arch-sls Servers
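The drive count is simply the required hourly rate divided by the per-drive throughput at 60% utilization, rounded up; a short check:

```python
import math

rate_gb_h = 40 * 1024 / 24   # ~1707 GB/hour to move 40 TB in one day
drive_gb_h = 259.2           # LTO-4 at 60% utilization (Table 41)
print(math.ceil(rate_gb_h / drive_gb_h))  # 7 concurrent drives
```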

Considering the data rate of each tape drive according to Table 41, we'll need at least 7 tape drives running concurrently for this backup job.

Policy and Schedule

The policies and schedules to back up the SenSage servers will be defined according to the following tables:
Policy Attributes

Policy Name    | Policy Type | Snapshot Method                | Policy storage
POL_SenSage__1 | FlashBackup | EMC_CLARiiON_SnapView_Snapshot | DSUG_SenSage_1
POL_SenSage__2 | FlashBackup | EMC_CLARiiON_SnapView_Snapshot | DSUG_SenSage_1
POL_SenSage__3 | FlashBackup | EMC_CLARiiON_SnapView_Snapshot | DSUG_SenSage_1
POL_SenSage__4 | FlashBackup | EMC_CLARiiON_SnapView_Snapshot | DSUG_SenSage_1
POL_SenSage__5 | FlashBackup | EMC_CLARiiON_SnapView_Snapshot | DSUG_SenSage_1

Table 43: Policy Definition- SenSage Backup


Schedule Attributes

Schedule Name                     | Type of Backup      | Frequency | Multiple Copy | Retention                     | Backup Window | Schedule On
SDL_SenSage_Weekly_Full_1         | Full Backup         | One Week  | 2             | 1 month Disk / 2 months Tape  | Always        | Friday 00:00
SDL_SenSage_Weekly_Full_2         | Full Backup         | One Week  | 2             | 2 months Disk / 2 months Tape | Always        | Friday 00:00
SDL_SenSage_Daily_Differential_1  | Differential Backup | One Day   | 1             | Three Weeks                   | 18:00 ~ 24:00 | Every day 18:00
SDL_SenSage_Monthly_Full_1        | Full Backup         | One Month | 2             | 2 months Disk / 4 months Tape | Always        | Friday 00:00
SDL_SenSage_Weekly_Differential_1 | Differential Backup | One Week  | —             | 1 month Disk / 4 months Tape  | 18:00 ~ 24:00 | Every day 18:00

Table 44: Schedule Definition- SenSage Backup

Another two systems which contain a considerable amount of data files to be backed up are the Mediation and Rating servers. These two systems serve a huge number of small files. Currently the backup process is to compress the files with gzip and then make tar files on a daily basis (for the Rating server), and then copy the tar files to tape as backup. The current process was designed because there is not enough storage space to save more files. After the systems move to the MCI Tohid Data Center, much more storage space will be assigned to these systems, and the current process surely needs to be revised. The advantage of the current design is that the backup process is easier for the backup system, as we just have to back up a small number of files; the disadvantage is that the system needs to spend more resources to create the tar files, and later, if any single file needs to be restored, we have to extract a big tar file to retrieve the desired file. According to the feedback from MCI, the volume of files on the Rating and Mediation servers is as in the following table. The retention policy for disk storage is also specified according to MCI requirements.
System          | File Type    | Daily (Raw) | Number of Files/Day | Daily Compressed | 7 Days Tape Backup | Retention Policy | Backup Storage
Mediation       | Raw (Binary) | 150 GB      | 120 K               | 42 GB            | 294 GB             | 6 Months         | 7.38 TB
Mediation       | Billing      | 125 GB      | 120 K               | 24 GB            | 168 GB             | 20 Months        | 14.06 TB
Mediation       | ICT          | 60 GB       | 110 K               | 5 GB             | 35 GB              | 13 Months        | 1.90 TB
Rating          | Rated        | 140 GB      | 2160 K              | 28 GB            | 196 GB             | 20 Months        | 16.41 TB
Mediation total |              |             |                     |                  |                    |                  | 23.35 TB
Rating total    |              |             |                     |                  |                    |                  | 16.41 TB
Total           |              |             | 2510 K              |                  | 693 GB             |                  | 39.76 TB + 5 TB (Spare) ~ 45 TB

Table 45: Mediation & Rating Storage Requirement

As we can see, the Mediation and Rating servers will need around 40 TB of disk space to back up their CDR files. Considering the growing size of these servers, we should consider another 5 TB as spare to meet possible future requirements. Our suggested policy is to assign this amount of storage on the CX4-240 to the mentioned servers, so they can store the backup copy of their CDR files on the allocated disk space. This part of the backup policy will be done without NetBackup intervention. The next step of the backup policy is to back up the CDR files to tape. To back up the CDR files to tape, a new file-processing script is needed on both the Mediation and Rating servers to keep the CDR files of the week before last in a separate, specific directory for tape backup purposes. It means that we should have the CDR files belonging to the period of 7 to 14 days ago in one specific directory (for example /mediation/rawfiles/tapebackup, /mediation/translated/tapebackup and /mediation/ICT/tapebackup); the first directory should include raw CDR files for 7 days, starting from 14 days ago up to 7 days ago. The CDR files for each day are stored in one single directory which is named in the format YYYYMMDD, equal to the date of that specific day. A NetBackup policy will back up the files in the specified directory once a week (every Saturday) to an offsite volume pool dedicated to CDR files. As we can check in Table 45, the volume of data for one week is around 700 GB, which is very near to the capacity of one tape (800 GB). The CDR files will be backed up to tape without compression, as the files are already stored in gzip format on the Mediation and Rating servers. The retention for the tape volume will be set to 10 years, so the tape should be stored at the offsite facility for 10 years. Every two weeks a Vault job should eject the tapes belonging to this volume pool in order to be transported to the offsite location. Table 46 and Table 47 below explain the policy and related schedule to back up the CDR files from the Mediation and Rating servers:
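A minimal sketch of such a staging script is shown below. The directory layout follows the example above; the exact paths, error handling, and scheduling (for example via cron before the Saturday policy run) are assumptions left to the implementation.

```python
#!/usr/bin/env python
# Sketch of the weekly CDR staging step described above. Paths follow the
# example layout in the text; adjust per server (rawfiles, translated, ICT).
import datetime
import shutil
from pathlib import Path

SRC = Path("/mediation/rawfiles")   # daily directories named YYYYMMDD
DST = SRC / "tapebackup"            # staged here for the weekly tape job

DST.mkdir(exist_ok=True)
today = datetime.date.today()
for age in range(7, 14):            # the week of data from 7..13 days ago
    day = today - datetime.timedelta(days=age)
    src_dir = SRC / day.strftime("%Y%m%d")
    if src_dir.is_dir():
        shutil.move(str(src_dir), str(DST / src_dir.name))
```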
Policy Attributes

Policy Name | POL_FSBackup_CDRFiles_1
Policy Type | Standard
Volume Pool | VPL_OffSite_CDRFiles_1
Schedule    | SDL_Weekly_Full_CDRFiles_1

Table 46: Policy Attributes- Mediation & Rating Servers


Schedule Attributes

Schedule Name  | SDL_Weekly_Full_CDRFiles_1
Type of Backup | Full Backup
Frequency      | One Week
Retention      | 10 years
Backup Window  | 00:00 ~ 08:00
Schedule On    | Saturday 00:00

Table 47: Schedule Attributes- Mediation & Rating Servers

As mentioned above, the disk storage and tape media requirements can be summarized as in Table 48:

Total Space Needed on CX4-240                      | 40 TB
Number of required offsite Tape Media/Round        | 1 / Week
Number of required offsite Tape Media for one year | 50
Tape Volume Pool Name for offsite backup           | VPL_OffSite_CDRFiles_1

Table 48: Storage Requirement Summary

According to Table 35: Application Data Files on Servers, there are other application servers that need to be backed up regularly. The amount of data files on these servers is not large enough to affect the network traffic, so we'll use the LAN backup method to back up the data on these servers. These servers also share the same RPO, so we can define policies with the same schedule for all of them. To satisfy the different RTO requirements of the servers in this group, all the backup images will be stored on disk storage (CX4-240), so that the minimum time is needed for recovery of the required files. Another reason to store the backup images on disk is that, in most cases, a restore request concerns individual files, not the entire backup image. By having the backup on disk, we can guarantee efficient recovery of individual files in a short time. The common schedule to back up all these systems is to take a full backup of the specified directories on a weekly basis and then incremental backups on a daily basis. Using the above-mentioned policy, in the worst case the system can be recovered to the state of one day before; we can also recover the data to any specific date within the last 30 days.
Policy [POL_APPData_STD_1] Client List

Row | Server Name     | Operating System | Include Paths                       | Estimated Size
1   | Intbill-web-srv | Solaris 10       | /icb/bmd/rating/processed, /var/adm | 200 GB
2   | lms-netmgt-srv  | Solaris 10       | /var/log                            | 5 GB

Table 49: Client List-1

Policy [POL_APPData_WIN_1] Client List

Row | Server Name     | Operating System | Estimated Size
1   | tvl-mon-srv     | Windows 2003     | 2 GB
2   | tvl-netview-srv | Windows 2003     | 2 GB
3   | tap-out-srv     | Windows 2003     | 200 GB
4   | tap-in-srv      | Windows 2003     | 800 GB

Include paths (under D:\taphome\tapapp\): archive_rated CDR, archive_checked CDR, archiv_tapout, archive_readytotap, EDCH, Daily_Sent_Taps

Table 50: Client List-2

Policy Attributes

Policy Name       | Policy Type | Data Classification    | Policy Storage
POL_APPData_STD_1 | Standard    | No Data Classification | DSUG_APPData_1
POL_APPData_WIN_1 | MS-Windows  | No Data Classification | DSUG_APPData_1

Table 51: Policy for other servers

Schedule Attributes

Schedule Name             | Type of Backup     | Frequency | Retention | Backup Window | Schedule On
SDL_APPData_Weekly_Full_1 | Full Backup        | One Week  | 4 Weeks   | 00:00 ~ 08:00 | Friday 00:00
SDL_APPData_Daily_INC_1   | Incremental Backup | One Day   | 4 Weeks   | 00:00 ~ 08:00 | Every night 00:00

Table 52: Schedule for other servers

Every month one copy of the full backup image will be duplicated to off-site tape using the Vault option. The retention of the off-site tape for these servers will be set to 3 months, so the image will expire after 3 months and the tape needs to be returned back to the site. Some of the servers in this part need to retain data for archival purposes for a long time. This requirement concerns the CDR files on the interconnect servers plus the tap-in and tap-out servers. As far as we know, these are the same files as stored on the Rating server; if so, we do not need to keep a separate backup of these servers for archival. If a separate archival copy of these servers is still needed, we suggest using separate directories to store the files for each year. Then we can add another backup policy to back up the data in those specific directories at the end of each year. The backup will be stored on an offsite volume and the retention will be set to 10 years or infinite to keep the archival copy for a long time. All policies mentioned in this part can be summarized in the following table:

Data             | Backup Level | Frequency | Retention | Media
Application Data | Full         | 1 Week    | 4 Weeks   | Disks (CX4-240)
Application Data | Incremental  | 1 Day     | 4 Weeks   | Disks (CX4-240)
Application Data | Vault Copy   | 1 Month   | 3 months  | Off-Site Tape

Table 53: Policy Summary- Other Servers

Storage Requirements Calculation

Table 54 below shows the calculation for the space requirement on disk (CX4-240):

Total Amount of Data for Full Backup/Round                                    | 1300 GB
Total Space Requirement for Full Backup Images on Disk (Retention = 4 Weeks)  | 4 * 1300 GB = 5 TB
Total Amount of Data for Incremental Backup/Week                              | 20% * 1300 GB = 260 GB
Total Space Requirement of Incremental Images on Disk (Retention = 4 Weeks)   | 4 * 260 GB = 1 TB
Total Space Required on Disk (Full + Incremental)                             | 5 + 1 = 6 TB

Table 54: Disk Space Requirement
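These disk figures can be reproduced with a couple of lines; the 20% weekly change rate is the planning assumption used above:

```python
full_gb = 1300                   # weekly full backup per round (Table 54)
weekly_inc_gb = 0.20 * full_gb   # ~260 GB of incrementals per week
disk_tb = (4 * full_gb + 4 * weekly_inc_gb) / 1024
print(round(disk_tb, 1))         # ~6.1 TB, rounded to 6 TB above
```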

Table 55 shows the number of required tapes. It should be noted that these don't include the tapes required for archival purposes; if archival is needed, the number of tapes should be increased accordingly.

Total Amount of Data for Full Backup/Round | 1300 GB
Size of Tape Media                         | 800 GB
Number of required tapes/month             | 2
Total number of required tapes             | 3 * 2 + 2 (one round as spare) = 8

Table 55: Tape Requirement

Table 56 below shows the definition of the storage units on disk to be allocated to this type of backup. We consider 2 TB of extra space for any other servers that may be added to this list later.

Storage Unit Group        | Storage Unit Name        | Size
DSUG_FSBackup_DataFiles_1 | DSU_FSBackup_DataFiles_1 | 2 TB
DSUG_FSBackup_DataFiles_1 | DSU_FSBackup_DataFiles_2 | 2 TB
DSUG_FSBackup_DataFiles_1 | DSU_FSBackup_DataFiles_3 | 2 TB

Table 56: Storage Units on Disk


Another type of data that needs to be backed up is the data stored in the different databases. This is the most important data, creating the most value for the business, and it should be recovered in the minimum possible time with the least possible data loss. Currently three different database products are used by MCI: Oracle, MS-SQL, and MySQL. The backup method for each type of database is explained in the previous chapter, "Different backup methods". In this part we'll explain the specific policies for each database and do the calculation of the storage requirements. To create backups of the Oracle databases we'll use RMAN and NetBackup for Oracle to create the backup sets of the Oracle databases, as described in section 4. To accomplish this type of backup we need to install the NetBackup Enterprise Client and the NetBackup Application and Database pack on each server containing an Oracle database.

Table 57 below lists all the Oracle databases in the MCI Tohid Data Center which need to be backed up.

Server Name    | DB Name | DB Type & Release | DB Size | Data Change Frequency | RTO     | Backup Window | RPO         | Archive | Encryption
tap-out-srv    | RM      | Oracle 9.2        | 108 GB  | Daily                 | 2 Hours | 00:00 ~ 06:00 | Almost Zero | No      | No
tap-in-srv     | RM      | Oracle 10.1.0     | 64 GB   | Daily                 | 4 Hours | 00:00 ~ 06:00 | Almost Zero | No      | No
sa-db-srv      | SA      | Oracle 9.2.0.1    | 500 GB  | Continually           | 2 Hours | Always        | Almost Zero | No      | No
intbill-db-srv | ICT     | Oracle 10.2.0.4   | 20 TB   | Continually           | 8 Hours | 00:00 ~ 06:00 | Almost Zero | No      | No
incc-db-srv    | prepaid | Oracle 9.2.0.1    | 400 GB  | Continually           | 2 Hours | Always        | Almost Zero | No      | No
bcc-db-srv     | bc      | Oracle 9.2.0.4    | 1 TB    | Continually           | 2 Hours | Always        | Almost Zero | No      | No
vchbnk-db-srv  | prepaid | Oracle 9.2.0.4    | 1 TB    | Daily                 | 4 Hours | Always        | Almost Zero | No      | No

Table 57: Oracle Databases

Besides the servers mentioned in the above table, there are other Oracle databases inside the MCI Data Center for which we don't have exact information, but we do have the estimated database sizes. Table 58 below lists the servers and the related database size on each server.

Server Name   | Estimated Size of DB
bill-db-srv   | 18 TB
simbnk-db-srv | 500 GB
vas-db-srv    | 2.5 TB
otp-db-srv    | 1 TB

Table 58: Estimated size of Oracle Databases

Among all the listed databases, the ICT database residing on the intbill-db-srv server and the databases on bill-db-srv are of very large size. To back up these databases we set up the mentioned servers to function as SAN media servers. As mentioned in the previous chapter, we can use SAN-style backup via SAN Media Server to back up these databases. It helps us assign the tape resources to the server itself, so the server can copy the backup data directly to tape. For all large databases like the ICT database, the full backup (level 0) of the database will occur every two weeks. The backup image will be copied to the tape drives directly with the retention set to 1 month. The full backups of the large databases will be distributed over different times as much as possible. Every month one copy of the full backup will be duplicated to off-site tape using Vault. The retention period for the offsite tape will be set to 2 months. Every day one incremental backup (level 1) will back up the changes since the full backup. This copy will be sent to the disk storage on the CX4-240. As the size of the incremental backup is not too large, it can be stored on disk; also, having the backup on disk will help us to lower the RTO. The archive logs will be backed up every two hours to the disk. Based on the requirements from MCI, the backup plan for normal-size databases will be as follows: A full backup will back up the database on a weekly basis. This copy will be saved on the CX4-240 and the retention of this backup image is set to 2 weeks. Every 2 weeks the image of the full backup on the disk will be duplicated to off-site tape using a vault copy; the retention period for this tape backup will be set to 1 month. The incremental backup of the database will take place every night between 00:00 and 06:00. The incremental backup will be saved on the CX4-240 with a retention of 1 week. In this way we can recover the database to any specific date in the last week with RTO < 2 hrs, because all backup images are stored on disk.

The database backup can be summarized in the following tables:

Large Size Databases

Data     | Backup Level          | Frequency       | Retention | Media
Database | Full (Level 0)        | 2 Weeks         | 1 month   | On-site tape
Database | Incremental (Level 1) | Daily           | 2 Weeks   | Disks (CX4-240)
Database | Archive Logs          | Every 2/4 Hours | —         | Disks (CX4-240)
Database | Vault Copy            | Every month     | 2 months  | Offsite tape

Table 59: Oracle Backup Summary-Large Size Database

Normal Size Databases

Data     | Backup Level          | Frequency     | Retention | Media
Database | Full (Level 0)        | 1 Week        | 2 Weeks   | Disks (CX4-240)
Database | Incremental (Level 1) | Daily         | 1 Week    | Disks (CX4-240)
Database | Archive Logs          | Every 2 Hours | —         | Disks (CX4-240)
Database | Vault Copy            | Every 2 weeks | 1 month   | Off-site Tape

Table 60: Oracle Backup Summary- Normal Size Database

The calculation of the required storage for the backup of the large-size databases is shown in the following table:

Large-Size DBs (intbill-db-srv, bill-db-srv)

Total Amount of Data for Full Backup/Round                      | (20 + 18) TB = 38 TB
Total Space Requirement for Full Backup Images on On-Site Tape  | 2 * 38 TB = 76 TB
Total Amount of Data for Incremental Backup/Week (Size on Disk) | 10% * 38 TB = 3.8 TB
Total Space Requirement on Disk (Retention = 2 Weeks)           | 2 * 3.8 TB = 7.6 TB
Total Space Requirement for Full Backup Images on Off-Site Tape | 2 * 38 TB = 76 TB
Size of Tape Media (Compression 2:1)                            | 1600 GB
Number of required on-site tapes                                | 76 * 1024 / 1600 = 49
Number of required off-site tapes                               | 76 * 1024 / 1600 = 49

Table 61: Storage Requirement- Large Size Oracle DB
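A quick check of the tape counts, under the same 2:1 compression assumption:

```python
import math

full_tb = 20 + 18   # ICT plus bill-db-srv full backups (Table 61)
tape_gb = 1600      # LTO-4 with 2:1 compression assumed
print(math.ceil(2 * full_tb * 1024 / tape_gb))  # 49 tapes, on-site and off-site alike
```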

For the normal-size databases the calculation is as follows:

Total Amount of Data for Full Backup/Round                                   | 8 TB
Total Space Requirement for Full Backup Images on Disk (Retention = 2 Weeks) | 2 * 8 TB = 16 TB
Total Amount of Data for Incremental Backup/Week                             | 10% * 8 TB = 0.8 TB
Total Space Requirement on Disk (Full + Incremental)                         | 16 TB + 0.8 TB = 16.8 TB
Size of Tape Media (Compression 2:1)                                         | 1600 GB
Number of required off-site tapes/Week                                       | 8 * 1024 / 1600 = 6
Total Number of required off-site tapes (Retention = 2 Weeks)                | 6 * 2 = 12

Table 62: Storage Requirement- Normal Size Oracle DB
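And the weekly off-site tape count for the normal-size databases:

```python
import math

weekly_full_tb = 8
tape_gb = 1600                                         # 2:1 compression assumed
per_week = math.ceil(weekly_full_tb * 1024 / tape_gb)  # 6 tapes per week
print(per_week, per_week * 2)                          # 12 tapes held at 2-week retention
```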

There are only two SQL Server databases inside the MCI Tohid Datacenter. The information is as in the following table:

Server Name | Database Name | Database Type & Release | DB Size | Data Change Frequency | RTO | Backup Window | RPO | Archive | Encryption

Table 63: MS-SQL Server Database

As mentioned in section 4, the MS-SQL databases should be backed up using MS-SQL-Server as the policy type. The policy to protect this database will be defined to take a full backup of the database on a weekly basis and a differential backup on a daily basis. All the backup images will be copied to the disk storage on the CX4-240 and the retention will be set to 2 weeks. The image of the full backup will be copied to offsite tape as well; the retention for the offsite tape will be 2 weeks. The following two tables show the policy and schedule attributes for this server.
Policy Attributes

Policy Name    | POL_DBBackup_MSSQL_1
Policy Type    | MS-SQL-Server
Policy Storage | —

Table 64: Policy Attributes- MSSQL Server

Schedule Attributes

Schedule Name           | Type of Backup      | Frequency | Retention | Backup Window | Schedule On
SDL_MSSQL_Weekly_Full_1 | Full Backup         | One Week  | 2 Weeks   | 00:00 ~ 08:00 | Thursday 00:00
SDL_MSSQL_Daily_Diff_1  | Differential Backup | One Day   | 2 Weeks   | 00:00 ~ 08:00 | Every day 00:00


Table 65: Schedule Attributes- MSSQL Server

Another type of database used across the MCI Tohid Datacenter is MySQL. Table 66 below lists the MySQL databases:

Server Name   | Database Name  | Database Type & Release | DB Size | Data Change Frequency | RTO     | Backup Window | RPO         | Archive | Encryption
reg-db-srv    | mcireg         | MySQL 5.0.22            | 200 MB  | Continually           | 2 Hours | Always        | Almost Zero | No      | No
reg-db-srv    | payment_mcireg | MySQL 5.0.22            | 200 MB  | Continually           | 2 Hours | Always        | Almost Zero | No      | No
reg-db-srv    | evocher_ice    | MySQL 5.0.22            | 1 GB    | Continually           | 2 Hours | Always        | Almost Zero | No      | No
reg-db-srv    | payment_ice    | MySQL 5.0.22            | 1 GB    | Continually           | 2 Hours | Always        | Almost Zero | No      | No
reg-db-srv    | postpaid       | MySQL 5.0.22            | 400 MB  | Continually           | 2 Hours | Always        | Almost Zero | No      | No
portal-db-srv | myisam         | MySQL 5.0.77            | 2 GB    | Continually           | 2 Hours | 00:00 ~ 06:00 | Almost Zero | No      | Yes (passwords of subscribers)

Table 66: MYSQL Database List

So far NetBackup has not had an agent to support hot backup of MySQL, but Symantec has announced a new agent to support hot backup of MySQL databases using NetBackup. However, there is not yet much information about this new agent and its compatibility with different releases of MySQL. Our suggested solution is to use mysqldump (or any other method chosen by the DB administrator) to dump the database to files on disk and then back up the files using the file system backup capability of NetBackup. The dump process must be managed by the relevant database administrator of each system. After that, NetBackup can be scheduled to take a full backup of the directory that stores the dump files on a daily basis. As the volume of data in this part is not large, we do not need to consider data rate calculations or storage requirements. The storage used for this backup is the same as the storage for application data files on other servers, as mentioned in section 4.2.3. The database myisam, which includes the user names and passwords of the subscribers, needs to be encrypted when backed up to tape. For this purpose MSEO will be configured on media-nbkp-srv1 to make a virtual tape which is encryption enabled. The data from the mentioned server will be encrypted using AES-256 and then copied to tape.
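A minimal sketch of the suggested nightly dump step is shown below. The dump directory and the credential handling are illustrative assumptions; the DBA may equally use different mysqldump options or a different tool.

```python
# Hedged sketch of the suggested nightly dump-to-disk step; the dump
# directory and reliance on default client credentials are placeholders.
import datetime
import gzip
import subprocess
from pathlib import Path

DUMP_DIR = Path("/backup/mysqldump")      # directory NetBackup backs up daily
DATABASES = ["mcireg", "payment_mcireg", "evocher_ice",
             "payment_ice", "postpaid"]   # reg-db-srv databases (Table 66)

DUMP_DIR.mkdir(parents=True, exist_ok=True)
stamp = datetime.date.today().strftime("%Y%m%d")
for db in DATABASES:
    # --single-transaction gives a consistent dump for InnoDB tables;
    # MyISAM tables may need --lock-tables instead.
    dump = subprocess.run(["mysqldump", "--single-transaction", db],
                          check=True, capture_output=True)
    out = DUMP_DIR / ("%s_%s.sql.gz" % (db, stamp))
    with gzip.open(out, "wb") as fh:
        fh.write(dump.stdout)
```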


The domain controller of the MCI Tohid Datacenter is named rslv-dns-srv. This system is backed up using a weekly full backup with the BMR option enabled. Beside the mentioned policy, we'll define a separate policy to back up the Active Directory of the domain controller, in order to enable the administrator to restore any specific item within Active Directory. As mentioned in section 3.8, the policy to back up the Active Directory must use the option named "Enable granular recovery". The rest is almost the same as the other policies that back up files on the file system. Every night at 00:00 we'll back up the Active Directory of the domain controller, and the backup image will be retained for two weeks. The backup image will be stored together with the other backup images of file system backups, as mentioned in section 4.2.3. The following two tables show the policy and schedule that will be defined for this purpose:
Policy Attributes

Policy Name                | POL_APPData_ActDir_1
Policy Type                | MS-Windows
Granular Recovery Option   | Enabled
Policy Storage             | DSUG_APPData_1
Backup Selection Directive | System_State

Table 67: Active Directory Policy Attributes

Schedule Attributes

Schedule Name  | SDL_ActDir_Daily_Full_1
Type of Backup | Full Backup
Frequency      | Daily
Retention      | 2 Weeks
Backup Window  | 00:00 ~ 08:00
Schedule On    | 00:00

Table 68: Active Directory Schedule Attributes


Vault is an extension to NetBackup that automates the selection and duplication of backup images and the ejection of media for transfer to and from a separate, off-site storage facility. NetBackup Vault also generates reports to track the location and content of the media. The media are sent off-site to be used for disaster recovery and also for other archiving purposes, such as regulatory archival. NetBackup Vault uses existing NetBackup functions for all operations, such as duplication of images, media control, reporting, and ejecting and injecting of tapes. Information from Vault is integrated with other NetBackup components. Figure 14 shows the NetBackup, Media Manager, and Vault relationships.

Figure 14: Relationships of Vault and other NetBackup components


The NetBackup Vault option needs to be licensed separately before it can function. The license keys considered for the activation of Vault are:
1. VERITAS NetBackup Vault Base: this is the basic license to activate the Vault option; it allows the use of four physical tape drives inside the tape library as the source or destination for vaulting.
2. Symantec VERITAS NetBackup Vault Additional Drive * 20: this one enables the vaulting option for the rest of the tape drives inside the tape library. So all 24 tape drives of the IBM tape library can be used as source or destination for vault copies, and they all can be ejected by Vault.
The main function of vaulting is to automate the job of copying backup images to specific tape volumes and ejecting them from the tape library to be sent to the offsite location. This process can be done using original backup images or duplicated backup images. Original backup images are those created during the run of the backup policy job. We can create multiple copies of the backup image concurrently; one of them can be written to specific tape volumes to be sent offsite via the vaulting process at a later stage. Duplicated images are those created by Vault after the backup policy has made the backup image on disk or tape. This image duplication is done using the Vault option. One of the most important choices for Vault planning is whether to send original or duplicate images off site. To minimize the tape drive usage and shorten the time for the backup process, we'll use the following solution: For most cases (file system backups and normal database backups), we'll use disk staging. Disk staging means that the backup images are written to the disk storage units on the CX4-240 during a NetBackup policy job, and the images are then copied to tape during a Vault job. So the Vault sessions will duplicate the original disk backup images to offsite volume media. This strategy shortens the backup time and minimizes the tape drive usage.


For databases of very large size, we'll make two original copies of the backup image concurrently. One copy will remain onsite, and the other copy will use the offsite volume pool to be handled by Vault. The above solution eliminates the need for tape-to-tape duplication, and as a result we can keep more tape drives available for the normal daily backup and restore process. The Vault process includes the following steps:
1. Choosing backup images
2. Duplicating backup images
3. Backing up the NetBackup catalog
4. Ejecting media
5. Generating reports
Details of each step of the Vault process are explained in the following:
1. Choosing backup images
The first step of the Vault process is to choose the backup images that are candidates to be transferred off site. We must configure the image selection for every Vault job. This is done using a Vault profile. A Vault profile is a set of rules for selecting images, duplicating images, and ejecting media. Vault uses the criteria in a Vault profile to determine which backup images are candidates to send off site.
2. Duplicating backup images
In the second step of the Vault process, the backup images that are candidates to be transferred off site are duplicated. Image duplication is optional. As mentioned before, for normal filesystem backups we first back up to disk and then duplicate the image to offsite tape. For large-size databases we'll make two concurrent copies in the backup policy, so in this case we simply skip the duplication step. The volume pools that will be used for offsite storage are defined in Table 69:


Volume Pool Name     | Purpose
VPL_OffSite_Catalog  | For Vault catalog backups
VPL_OffSite_DBBackup | For the database backups to be sent offsite
VPL_OffSite_FSBackup | For the filesystem backups to be sent offsite

Table 69: Offsite Volume Pools

So the backup images which are originally written to these volume pools, or duplicated to these volume pools, will be sent offsite using Vault.
3. Backing up the NetBackup catalog
In the third step of the Vault process, the NetBackup catalog is backed up. Backing up the catalog is optional; however, vaulting a catalog backup with the data can help us recover from a disaster more efficiently. This backup is different from the routine NetBackup catalog backup that we create regularly, because the regular NetBackup catalog backup does not include the latest information about duplicated media and media sent offsite during the most recent vault operation. Vault creates its own catalog backup with up-to-date information. We need to create a dedicated volume pool for Vault catalog backups. This pool is named VPL_OffSite_Catalog. We'll use this pool only for the Vault catalog backups; this helps us assign specific media tapes to the pool so they are easy to find later (for example in case of disaster recovery). The retention of the NetBackup catalog backup will be set to two months, so we can maintain the four most recent catalog backups. Table 70 below shows the tape requirements and other information for the Vault catalog backup.

Policy Name                         | POL_Vault_CatalogBackup_1
Policy Type                         | Vault-Catalog-Policy
Schedule Frequency                  | 2 Weeks
Volume Pool Name                    | VPL_OffSite_Catalog_Backups
Retention                           | 2 Months
Estimated Size of NetBackup Catalog | 500 GB
Size of tape media                  | 800 GB
Number of tapes required/round      | 1
Total required tape media           | 4

Table 70: Vault Catalog Backup


4. Ejecting media
In the fourth step of the Vault process, the media that should be transferred to secure storage are ejected. Media can be ejected automatically by a scheduled Vault job or manually after the job has completed. We suggest ejecting the tapes manually twice a month: on the first day and the 15th day of each month, the eject process should be run manually. This way we can eject the tape media at the same time as we want to deliver the tapes to the offsite storage location. The eject process for all the Vault jobs can be done through the NetBackup Administration Console. Figure 15 below shows the eject interface in the NetBackup Administration Console.

Figure 15: Vault Eject Interface

So the eject operations for multiple Vault jobs will be consolidated into a single manual eject operation. After the eject operation the tapes should be collected from the I/O slots of the IBM tape library and packed to be sent to the offsite facility. The packing and delivery should happen on the same day as the eject process, to eliminate the risk of losing media tapes or overwriting them by mistake.


5. Generating reports
In the fifth step of the Vault process, reports are generated. Reports track the media that Vault manages. We can use the reports to determine which media should be moved between the Tohid site and the off-site storage location, and when the moves should occur. Reports can be generated as part of the Vault job or manually after the job is finished. In the next part of this section we'll explain the different report types and the purpose of each. Tapes which are sent offsite should be returned back to the Tohid site after their backup images expire. The retention for each vault job is specified in section 4. For most of the offsite tapes we set the retention to three months; it means that after 3 months the tapes should be returned back to the site. As explained in the next part, we use the Vault reports to determine which tapes should be returned back to the site. The list of tapes to be recalled will be sent to the offsite operators, and the tapes should be collected based on the list. After the tape media are returned back to the site, they should be injected into the robot and added to the scratch pool for reuse. If a tape is returned back to the site before the backup image expiration date (for example, a volume recalled to be used for a restore operation), we should revault the media. It means the media should be ejected manually and sent back offsite, to be stored until the image expiration date. Vault Management will be configured to send an e-mail notification to the system administrator or a group of backup operators, providing a summary of the vault session and the status of the eject job when the eject job is completed. The NetBackup Vault option provides different kinds of reports, which help us track the location of the tape media, manage sending and recalling tapes to and from the offsite facility, and monitor the vault process to make sure all the steps are performed without any error. The reports provided by Vault can be categorized as below:


- Reports for media going off-site
- Reports for media coming on-site
- Inventory Reports
- Non-Vaulted Images Exception Report
Each category mentioned above includes different subcategory reports. They vary in the amount of detail included in each report and in the purpose of the report. In the rest of this part we'll explain the purpose of each report and how we are going to use them to manage the whole Vault process. The reports for media going offsite show media that have been ejected from the robot and are transported off-site. Among the different reports available in this category, we'll use three of them most often to control the outgoing tape media. The Picking List for Robot report shows the volumes ejected from the robot that should be transported off-site. This report is sorted by media ID and should be used by the operations staff as a checklist for media that have been ejected from the robots. The fields available in the report are shown in the following table, so the backup operators can use it as their checklist to collect the ejected media from the robot.

Images | Ejected | Expiration | Mbytes | Category | Media | Robot | Slot ID

Table 71: Picking List for Robot report

The Distribution List for Vault report shows the volumes that have been ejected from the robot and are transported off-site. This report is sorted by offsite slot number and should accompany the media destined for the offsite vault. The receiving person at the offsite location should use this report to verify that all the volumes listed were actually received.


The Detailed Distribution List for Vault report shows the volumes that have been ejected from the robot and are transported off-site. This report is similar to the Picking List for Robot and the Distribution List for Vault reports, except that it includes detailed information about the images on each volume. This report is useful at a disaster recovery site. It should be generated after each eject process, and the report should be uploaded to a secure server at the offsite location or any other secure server. In case of disaster recovery the report can be consulted to find out the content of each offsite tape volume. The reports for media coming on-site show the volumes that are being requested back from the offsite vault. These reports can be generated before or after media have been ejected for the current Vault session. The Picking List for Vault report shows the volumes that are being requested back from the offsite vault. This report should be sent to the operator at the offsite location. Volumes are listed on this report because Vault determined that they are in an offsite volume group and that all their images have expired. When Vault identifies these volumes, it changes the Return Date field for the media and adds the media ID and the date requested to this report. The inventory reports show the location of the media. These reports are not generated until the media have been ejected. The Vault Inventory (or Inventory List for Vault) report shows media that are offsite and media being sent offsite (outbound media in transit to the vault). Every 3 months this report should be generated and sent offsite. The operators at the offsite location should check the physical inventory of the tape media stored offsite against the list of tapes output by the Vault inventory report, so they can verify that they have the volumes that Vault indicates as offsite. Any inconsistency between the physical inventory and the report should be reported to the site managers and investigated carefully until it is resolved.

The Lost Media report lists expired media that have not been returned from the offsite location. Media can get lost for various reasons. The Lost Media report should be generated every month to check whether any offsite tape with expired images has been missed and not returned back to the site. The list should be sent to the offsite operators to recall the media that should have been returned but have not.

NetBackup can use a variety of different storage systems to store the backup images. The storage systems here are those used by NetBackup as the destination of backups, not the source. The data generated by a backup job or another type of job is recorded in storage. A storage destination can be tape or disk. We must define storage destinations with the Storage utility before a backup job or another type of job can run. For the MCI Tohid Data Center we'll have two storage systems:
1. EMC CLARiiON CX4-240
2. IBM TS3500 Tape Library
We'll use the Storage utility to define two different storage configurations:
Storage Units: The primary storage destination is a storage unit. Storage units can be included as part of a storage unit group. A storage unit is a label that NetBackup associates with physical storage.
Storage Unit Groups: Storage unit groups let us identify multiple storage units as a group. How the storage units are selected within the group is determined when the group is created.
The creation of any storage unit type consists of the following general steps:
1. Name the storage unit.
2. Choose the storage unit type: Media Manager, disk, or NDMP.

3. Select a media server. The selection indicates that the media server has permission to write to the storage unit. We can select multiple servers if needed.
4. Indicate the destination where the data is written.
NetBackup supports the configuration of different types of disk storage, including AdvancedDisk, BasicDisk, NearStore, OpenStorage, etc. Among the different types of disk storage, BasicDisk is the easiest to configure and needs no additional license. A BasicDisk storage unit is any simple disk volume that is made available to NetBackup as a target. It could be a mounted volume from a NAS share or a SAN-attached volume. The storage unit consists of a directory that is exposed as a file system to a NetBackup media server. In our topology the EMC CLARiiON CX4-240 will be configured as BasicDisk storage.
Configuring BasicDisk storage: No special configuration is required for BasicDisk storage. The directory is specified when the storage unit is configured. NetBackup stores backup data in the specified directory. For disk storage, NetBackup permits an unlimited number of disk storage units.
Storage Unit Group Name   | Storage Unit Names
—                         | DSU_FSBackup_1, DSU_FSBackup_2, DSU_FSBackup_3, DSU_FSBackup_4, DSU_FSBackup_5, DSU_FSBackup_6, DSU_FSBackup_SANBoot_1, DSU_FSBackup_SANBoot_2
DSUG_FSBackup_DataFiles_1 | DSU_FSBackup_DataFiles_1, DSU_FSBackup_DataFiles_2, DSU_FSBackup_DataFiles_3

Table 72: Disk Storage Configuration

The summary of disk space on the CX4-240 is shown in the following table:

Space Allocation                                  | Size   | Ratio
Total RAW Space                                   | 180 TB |
Total Useable Space (RAID 5)                      | 120 TB |
NetBackup Media Server - LAN Management           | 12 TB  | 10%
NetBackup Media Server - SAN Management           | 30 TB  | 25%
NetBackup SAN Media Servers                       | 8 TB   | 7%
Mediation + Rating Servers + Extra Space as Spare | 45 TB  | 38%
Total Allocated Space                             | 95 TB  | 80%
Unassigned Space                                  | 25 TB  | 20%

Table 73: Disk Storage Space Allocation

To configure the tape library for NetBackup we'll use the Device Configuration Wizard to add, configure, and update the robots, tape drives, and shared drives. The wizard discovers the devices attached to the media servers and helps us configure them. The robotic control for the robot is on the NetBackup master server (master-nbkp-srv), and all 24 tape drives of the library must be visible to the NetBackup media servers (media-nbkp-srv1, 2) and the SAN media servers. To configure and use a shared drive, a Shared Storage Option license is required on each master server and media server. Media sharing allows media servers to share media for write purposes (backups). Media sharing provides the following benefits:
- Increases the utilization of media by reducing the number of partially full media.
- Reduces media-related expenses because fewer tape volumes are required and fewer tape volumes are vaulted.
- Reduces administrative overhead because we inject fewer scratch media into the robotic library.


- Increases the media life because tapes are mounted fewer times.

Shared Storage Option

The Shared Storage Option allows multiple NetBackup media servers to share individual tape drives. NetBackup automatically allocates and unallocates the drives as backup and restore operations require. The Shared Storage Option key is ordered for all 24 drives of the IBM tape library in the MCI Tohid Datacenter. The license key allows drive sharing between the NetBackup media servers and all SAN media servers.

Shared Storage Option Architecture

The NetBackup Enterprise Media Manager (EMM) manages media information. The Enterprise Media Manager is also the device allocator (DA) for shared drives. To coordinate network-wide allocation of tape drives, the EMM manages all shared tape requests in a SAN. EMM responds to requests from multiple instances of media servers and NetBackup SAN media servers. EMM maintains shared drive and host information, including a list of hosts that are online and available to share a drive, and which host currently has a drive reserved. To name the tape drives of the IBM tape library inside NetBackup we follow the default global rule. The default, global drive name rule creates names in the following format: vendor ID.product ID.index. As a result, the Ultrium 4 tape drives of the IBM TS3500 will be named as in the following table:
Row | Tape Drive    | Location        | Name
1   | Tape Drive#1  | Base Frame      | IBM.ULT3580-TD4.001
2   | Tape Drive#2  | Base Frame      | IBM.ULT3580-TD4.002
3   | Tape Drive#3  | Base Frame      | IBM.ULT3580-TD4.003
4   | Tape Drive#4  | Base Frame      | IBM.ULT3580-TD4.004
5   | Tape Drive#5  | Base Frame      | IBM.ULT3580-TD4.005
6   | Tape Drive#6  | Base Frame      | IBM.ULT3580-TD4.006
7   | Tape Drive#7  | Base Frame      | IBM.ULT3580-TD4.007
8   | Tape Drive#8  | Base Frame      | IBM.ULT3580-TD4.008
9   | Tape Drive#9  | Base Frame      | IBM.ULT3580-TD4.009
10  | Tape Drive#10 | Base Frame      | IBM.ULT3580-TD4.010
11  | Tape Drive#11 | Base Frame      | IBM.ULT3580-TD4.011
12  | Tape Drive#12 | Base Frame      | IBM.ULT3580-TD4.012
13  | Tape Drive#1  | Expansion Frame | IBM.ULT3580-TD4.013
14  | Tape Drive#2  | Expansion Frame | IBM.ULT3580-TD4.014
15  | Tape Drive#3  | Expansion Frame | IBM.ULT3580-TD4.015
16  | Tape Drive#4  | Expansion Frame | IBM.ULT3580-TD4.016
17  | Tape Drive#5  | Expansion Frame | IBM.ULT3580-TD4.017
18  | Tape Drive#6  | Expansion Frame | IBM.ULT3580-TD4.018
19  | Tape Drive#7  | Expansion Frame | IBM.ULT3580-TD4.019
20  | Tape Drive#8  | Expansion Frame | IBM.ULT3580-TD4.020
21  | Tape Drive#9  | Expansion Frame | IBM.ULT3580-TD4.021
22  | Tape Drive#10 | Expansion Frame | IBM.ULT3580-TD4.022
23  | Tape Drive#11 | Expansion Frame | IBM.ULT3580-TD4.023
24  | Tape Drive#12 | Expansion Frame | IBM.ULT3580-TD4.024

Table 74: Tape Drives Naming

Each Ultrium data, cleaning, and diagnostic cartridge that is processed by the TS3500 tape library must bear a bar code label. The label contains:
- A volume serial (VOLSER) number that we can read
- A bar code that the library can read
When read by the library's bar code reader, the bar code identifies the cartridge's VOLSER to the tape library. The bar code also tells the library whether the cartridge is a data, cleaning, or diagnostic cartridge. The bar code scan results associate the slot number and the bar code with the media in that slot. After that, NetBackup obtains the bar code and slot information from the tape library. NetBackup uses the VOLSER information provided by the tape library for the following advantages:
- Automatic media ID assignment
- More accurate tracking of volume location
- Increased performance
We'll have three different cartridge types in use at MCI:


1. Data cartridges: to read and write data (used to store backup images and for recovery).
2. Cleaning cartridges: to clean the heads of the tape drives.
3. Diagnostic cartridges: reserved for diagnostic purposes only.
The VOLSER of cleaning cartridges begins with CLNI, and the VOLSER of diagnostic cartridges begins with DG, so they can easily be distinguished from data cartridges. The VOLSER for data cartridges has the form xxxxxxL4, where x represents the numeric characters 0-9 forming the serial number of the tape cartridge, and L4 means the cartridge is an Ultrium 4 tape cartridge. For example, the VOLSER of one cartridge is 000125L4. A tape volume is a data storage tape or a cleaning tape. NetBackup assigns attributes to each volume and uses them to track and manage the volumes. Attributes include the media ID, robot host, robot type, robot number, and slot location. The media ID is the number that NetBackup assigns to each tape cartridge to track it within the tape library and even when it is sent offsite. To associate a media ID with each tape cartridge, we'll use the media ID generation rules in NetBackup to automatically assign the media ID to each tape cartridge newly loaded into the tape library, based on the VOLSER of the tape cartridge. As mentioned before, the VOLSERs of the IBM tape cartridges are in the format XXXXXXL4, so we can define the rule to extract the first six characters of the VOLSER as the media ID for the tape volume.
Robot Number | Barcode Length | Media ID Generation Rule

Table 75: Media ID Generation Rule
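For illustration, the effect of the rule on the sample VOLSER from the text (a trivial sketch, not NetBackup's own implementation):

```python
def media_id(volser):
    """Media ID = first six characters of the VOLSER, per the rule above."""
    return volser[:6]

print(media_id("000125L4"))  # -> 000125
```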

A volume pool identifies a set of volumes by usage. When we add media to NetBackup, they should be assigned to a volume pool before they can be used. By default, NetBackup creates the following volume pools:


Volume Pool Name | Purpose
NetBackup        | The default pool to which all backup images are written (unless another pool is specified)
DataStore        | For DataStore use
CatalogBackup    | For NetBackup catalog backups
None             | For volumes that are not assigned to a pool

Table 76: Default Volume Pools

We also need to configure other volume pools for different purposes. The following table lists the volume pools that we will define for NetBackup.

Volume Pool Name        | Purpose
VPL_FSBackup            | For all file system backups stored onsite
VPL_OffSite_FSBackup    | For all file system backups stored offsite
VPL_SenSage             | For all SenSage backups stored onsite
VPL_OffSite_SenSage     | For all SenSage backups stored offsite
VPL_OffSite_CDR_Archive | For all CDR archives (Mediation & Rating servers) stored offsite
VPL_DBBackup            | For all database backups stored onsite
VPL_OffSite_DBBackup    | For all database backups stored offsite
VPL_OffSite_Catalog     | For NetBackup Vault catalog backups
Scratch                 | To add volumes to other pools

Table 77: Volume Pools to be defined

The scratch pool is a special volume pool. The scratch pool will be defined so NetBackup can transfer volumes from it when a volume pool has no volumes available. NetBackup returns expired media to the scratch volume pool automatically. We only assign specific tape media to the VPL_OffSite_Catalog and CatalogBackup volume pools; for example, tape cartridges with VOLSER 000100L4 to 000110L4 to the CatalogBackup volume pool (the exact range can be determined from the real bar codes after the tapes are delivered to the site). This is because the catalog backup is the most important element for the recovery of backup images, so having specific tape media assigned to the catalog backup makes the catalog backup much easier to find, even in case of disaster. To let NetBackup manage the allocation of volumes to volume pools, we don't add any volumes to the defined volume pools; instead, we'll add all the volumes to the scratch pool. NetBackup then moves volumes to the other pools as they are needed.


Each tape drive needs head cleaning after a specific amount of read/write activity. The tape drives need to be cleaned to eliminate errors when writing or reading data. NetBackup supports different types of drive cleaning, including reactive cleaning, operator-initiated cleaning, and frequency-based cleaning. The solution recommended by NetBackup is reactive cleaning, also known as TapeAlert cleaning. In this solution the drive determines and initiates the cleaning when needed. Because the TS3500 robotic library and the LTO-4 tape drives support the TapeAlert capability, NetBackup can be configured to poll the drives for TapeAlert status. The tape drives track how many read and write errors they have encountered within a certain time period, and the drive sets a CLEAN_NOW or CLEAN_PERIODIC flag when a threshold is reached. Using the mentioned solution for drive cleaning, no intervention from the backup operators is needed and cleaning is done automatically. We configure the library to hold 5 cleaning tape cartridges, so the robot can mount them for cleaning whenever needed.

After the implementation of NetBackup, there are several routine jobs that need to be done by the system administrator to guarantee that the backup system runs normally and all operations continue as planned. The IBM and NetBackup documentation provides full coverage of the operation and troubleshooting of these systems, which the system administrator can use to plan the daily operation jobs. In this section we just point out some major activities that need to be done by the system administrator and backup operators as their routine daily operations. The NetBackup catalogs are the internal databases that contain information about NetBackup backups and configuration. Backup information includes records of the files that have been backed up and the media on which the files are stored. The catalogs also contain information about the media and the storage devices. Since NetBackup needs the catalog information to restore client backups, the catalog is


the most important thing to be backed up regularly. If the catalog is lost because of a disk failure or a disaster, NetBackup can't recover the systems it has backed up before. As mentioned above, the catalog plays an integral part in a NetBackup environment; therefore a special type of backup protects the catalog. A catalog backup backs up catalog-specific data and produces disaster recovery information. A catalog backup is configured separately from regular client backups by using the Catalog Backup Wizard. The catalog can be stored on a variety of media, but we configure the catalog backup to use tape volumes, as tapes are more stable and the risk of data loss is much lower than on disk. A dedicated volume pool will be defined to be used exclusively for catalog backups. NetBackup allows online, hot catalog backup, so the catalog backup can be performed while regular backup activity occurs. The catalog backup will be scheduled as a full backup every week on Saturdays between 09:00 and 16:00, plus an incremental backup on a daily basis, every day between 09:00 and 16:00. As most of the backup jobs happen on weekends and at night, the mentioned times are the most off-peak hours for NetBackup. The catalog backups will be saved on dedicated on-site tape media assigned to the catalog backup pool. We do not need to copy the catalog backup to the off-site tape pool, as the vault process makes a separate copy of the catalog backup and sends it to the offsite facility. The e-mail address of the NetBackup administrator of the MCI Tohid Datacenter must be specified, so the disaster recovery information can be sent to them after every catalog backup. The system administrator should check the relevant logs and reports to make sure that the catalog is backed up successfully; this is one of the most important routine jobs of the NetBackup administrator and must be done with utmost care and attention. The e-mail message mentioned above also indicates the success of the catalog backup. The size of the NetBackup catalog depends on the number of files in the backups and the number of copies of the backups that are retained. As a result, the catalog has the potential to grow quite large, especially at MCI, as we are backing up a huge number of files (CDRs) regularly.


Large amounts of catalog data can pose different problems: they increase the time needed for the catalog backup (so the backup may not finish within the specified time window) and they lower the overall performance of NetBackup. NetBackup provides an archiving feature that can be used to move older catalog data to other disk or tape storage. Archiving can reduce the size of the catalog on disk and thus reduce the backup time. Symantec recommends keeping the size of the catalog below 750 GB, so the system administrator should monitor the size of the catalog and plan for catalog archiving when it reaches the mentioned size. The catalog archiving operations should be done when NetBackup is in an inactive state (no jobs are running). In the MCI Tohid Datacenter the online portion of the catalog resides on the CX4-960, and the archived part can be stored on the CX4-240. For healthy operation of NetBackup, there is a variety of tools available inside NetBackup which help the administrators monitor the operation of the system, generate reports, and audit changes. The system administrator and backup operators of MCI need to work with the different reports and monitoring options that suit their daily operation, to make sure all backups are successful and there is no potential risk that may cause future failures. NetBackup provides three major tools to control the operation of the system:
1. Activity Monitor
2. Audit Manager
3. Reports Utility
The Activity Monitor is used to monitor the current status of the system. It can be used to monitor and control NetBackup jobs, services, processes, and drives. The Activity Monitor includes different tabs, each of which is used to monitor particular actions. The Jobs tab in the Activity Monitor displays all of the jobs that are in process or that have completed for the currently selected master server. The Services tab in the Activity Monitor displays the status of NetBackup services on the master server and all media servers that the


selected master server uses. The Processes tab displays the NetBackup processes that run on the master server. Another important part of the Activity Monitor is the Device Monitor. If requests await action, or if NetBackup acts on a request, the Pending Requests pane appears. For example, if a tape mount requires a specific volume, the request appears in the Pending Requests pane. The system administrator should always watch the pending requests and perform the related actions to resolve or deny them. The Audit Manager can be used to investigate unexpected changes in a NetBackup environment. An audit trail is a record of user-initiated actions in a NetBackup environment. Essentially, auditing gathers the information to help answer who changed what and when they changed it. For example, it might be found that the addition of a client or a backup path has caused a significant increase in backup times. The audit report can indicate that an adjustment to a schedule or to a storage unit configuration might be necessary to accommodate the policy change. The Reports utility will be used to generate reports to verify, manage, and troubleshoot NetBackup operations. NetBackup reports display information according to job status, client backups, and media contents. Among the different report types available in the Reports utility, some must be used regularly and strictly to make sure there is no permanent error and no potential risk. We suggest regular reports as follows:
1. Reports on a daily basis: the Status of Backups report and the Problems report are two major reports that need to be generated and investigated every day by the system administrator. The Status of Backups report shows status and error information about the jobs that completed within the specified time period. If an error occurred, a short explanation of the error is included in the report.


The Problems report generates a list of the problems that the server has logged during the specified time period. Using the above two reports we can make sure that all the scheduled backups during the previous day completed without any error. If any error appears in the report, the Troubleshooter can be used to analyze its cause.
2. Reports on a weekly basis: the status of the storage units must be verified on a weekly basis to make sure we'll have enough disk space and tape media for NetBackup to store the backup images. The Disk Storage Unit Status report displays the state of the disk storage units (for example, the total capacity and the used capacity of each disk storage unit). The Tape Summary report summarizes active and non-active volumes for the specified media owner according to expiration date. It also shows how many volumes are at each retention level.
