
Inside the HP Cloud Map for Oracle RAC and Fusion Middleware on VMware

A VMware and HP 3PAR Reference Architecture for Oracle


Technical white paper

Table of contents
Executive summary
Solution environment
Overview
Cloud Map prerequisites and caveats
Networks
VMware setup
Software
HP IO template
HP Operations Orchestration workflows: RAC
HP Operations Orchestration workflow: SetupOFM
Summary
For more information

Executive summary
For IT teams, infrastructure provisioning can be both time-consuming and resource-draining. Each time a business unit, application owner, or development team requests resources, a lengthy process begins: IT experts have to capture system requirements, design the solution from scratch, and then identify the resources that are currently available and those that need to be procured. HP Matrix Operating Environment (Matrix OE) infrastructure orchestration enables your IT organization to provision infrastructure consistently and automatically from pools of shared resources using a self-service portal. You can rapidly provision resources ranging from a single virtual machine to complex multi-tier environments with physical and virtual servers and storage systems.

Oracle Real Application Clusters (Oracle RAC) is Oracle's popular cluster database technology, built on a shared cache architecture. Oracle Fusion Middleware (OFM) is Oracle's complete family of application infrastructure products, built on WebLogic, Oracle's Java application server. The OFM product suite ranges from Service Oriented Architecture (SOA) to enterprise portals, and is integrated with Oracle Applications and technologies to speed implementation and lower the cost of management and change.

Normally, a customer must get the systems allocated and then perform a host of manual steps: installing the operating system, Oracle RAC, the WebLogic Server, and the OFM software, and configuring the OFM domain for the admin server and each application server in the WebLogic cluster. As a support pillar of the HP Converged Infrastructure paradigm, all of this can be done automatically through HP Matrix OE infrastructure orchestration. This paper showcases the Service Oriented Architecture (SOA) Suite as a representative example for automating the deployment of any OFM product on an HP CloudSystem Matrix.

HP Cloud Maps were developed to accelerate the creation of a service catalog by providing a guide to automate infrastructure and application provisioning and deployment. The HP Cloud Map for Oracle RAC and Fusion Middleware on VMware virtual machines includes a template and associated workflows that provision RAC database servers and an OFM SOA Suite instance quickly, automatically, and in a repeatable fashion. HP CloudSystem Matrix can provision and decommission these setups in a fraction of the time that would be required by hand. In addition, a customer can pre-load software and configure system files on the VMware templates for RAC and OFM, eliminating repetitive manual work and saving time.

The focus of this white paper is the creation of the VMware and Linux virtual machine setup for the HP Cloud Map template; the workflows that complete the installation and configuration of the servers; and the scripts that the workflows leverage. The Cloud Map can be found at http://www.hp.com/go/cloudmaps/oracle, and instructions for importing the template are found in HP Cloud Map for Oracle RAC and Fusion Middleware on VMware: Importing the template.

This Cloud Map uses a specific environment to demonstrate the power of Matrix OE in a VMware and Linux environment. It can be used as a robust example to be customized to fit your specific needs. To look at other combinations or to piece together other existing Cloud Maps to fit your needs, see http://www.hp.com/go/cloudmaps/oracle.
Target audience: This document is for experienced IT database and system administrators and users who wish to learn more about the capabilities of HP Matrix OE and how it can be used to provision Oracle RAC and Oracle Fusion Middleware (OFM) or similar applications on VMware virtual machines using HP ProLiant servers. Knowledge of the HP CloudSystem Matrix and the underlying components will be helpful when reading this white paper. You should also have a basic understanding of VMware, Oracle RAC, and OFM. Please see the For more information section at the end of this paper for links to additional information on these topic areas. This white paper describes validations performed in July 2011.

Note: HP Matrix Operating Environment was previously referred to as HP Insight Dynamics. HP Matrix OE infrastructure orchestration was previously referred to as HP Insight Orchestration (IO). The HP Matrix Operating Environment uses a subset of HP Operations Orchestration capability. HP Cloud Maps leverage workflows that are authored using this subset of Operations Orchestration.

Solution environment
The instructions in this document assume you have already set up your CloudSystem Matrix, HP Virtual Connect (VC) infrastructure, and Central Management Server (CMS). You will need to specify the network connections required for Oracle RAC and OFM SOA Suite in the template definition. This template was created using HP Insight Dynamics 6.2 Update 1. The server where Insight Dynamics is installed is known as CMS. To download HP BladeSystem firmware, go to http://www.hp.com/go/matrixcompatibility. SAN storage is required for this Cloud Map. The testing was done first on an HP 8400 Enterprise Virtual Array (EVA8400) and then on an HP 3PAR T800 Storage System. The firmware and software versions used during template validation are listed in Table 1.
Table 1. Firmware and software levels

Component                                     Version
HP Insight Dynamics                           6.2 Update 1
HP Onboard Administrator                      3.21
HP Virtual Connect Manager                    3.17
Servers                                       HP ProLiant BL685c G5
Hypervisor Manager                            VMware vCenter 4.1.0
Hypervisor                                    VMware ESXi 4.1 Update 1
Virtual Machine Operating System              Red Hat Linux 5.5
Database                                      Oracle 11gR2 (11.2.0.2) RAC
Middleware                                    Oracle Fusion Middleware SOA (including WebLogic Server) 11g
HP Integrated Lights-Out                      2.01 (iLO 2)
QLogic QMH2462 4Gb Fibre Channel Adapter      Driver: 1.26; Firmware: 4.00.90

Overview
This white paper describes the process of setting up a VMware vCenter server and a VMware ESXi server for installing Oracle RAC and OFM on Linux virtual machines. After that, the document describes the workflows required to install Oracle RAC and OFM. This is a very detailed technical document that goes beyond the standard setup of infrastructure orchestration on the Central Management Server (CMS). It describes the first-time discovery of infrastructure components, the process of placing the VMware ESXi server in a server pool, and the basics of how to connect the VMware servers to the CMS server. See the For more information section at the end of this paper for a number of reference manuals and URLs that explain those tasks more thoroughly. This paper discusses the following:
1. Cloud Map prerequisites and caveats. Pre-installation requirements and what the Cloud Map can and cannot do.
2. Networks. Required networks, network names, and DNS server capability.
3. VMware setup. While the VMware manuals are the key to using VMware products, this very detailed section discusses which VMware products to install; obtaining licenses for them and for all Matrix OE components; presenting 3PAR / Enterprise Virtual Array (EVA) storage to the VMware hypervisor; setting up networks in Virtual Connect Enterprise Manager (VCEM) and the VMware vSphere client; creating VMware templates for RAC and OFM; copying the RAC software, the OFM cloned image, and the OFM packed domain to the VMware templates; and connecting the VMware vCenter and ESXi servers to the CMS server.
4. Software. Verifying that the RAC and OFM templates are available.
5. HP IO template. Overview of the RAC and OFM template setup.
6. HP Operations Orchestration workflows: RAC. A detailed overview of the RAC workflow, looking at the high-level workflow and its sub-workflows to get a feel for what is required to install RAC.
7. HP Operations Orchestration workflow: SetupOFM. A high-level overview of what it takes to install OFM and unpack its domain.
8. Successful deployment. Shows how, using this Cloud Map in CloudSystem Matrix, RAC and OFM install in a mere fraction of the time of manual deployment, and in an automated fashion.

Cloud Map prerequisites and caveats


The following assumptions were made in the creation of this Cloud Map:

Oracle RAC database server configuration:
- Oracle 11gR2 RAC binaries. Place them on the VMware RAC template as discussed in the VMware setup section.
- Schema and data for the RAC database server. These are supplied by the customer; the Cloud Map doesn't supply a vanilla schema or vanilla data.

Oracle Fusion Middleware configuration:
- File OFM_SOA_Golden_Linux_image.jar: Install WebLogic Server and all other OFM applications of interest (which may or may not include OFM SOA Suite) on another Linux server and use Oracle's cloning tool to build this file, which will be at least 2GB in size. See the For more information section at the end of this paper for details on Oracle cloning. Place OFM_SOA_Golden_Linux_image.jar on the VMware OFM template as discussed in the VMware setup section.

- This Cloud Map uses the OpenJDK JVM (Java Virtual Machine) that comes with Red Hat 5.5. Modify the OFM <domain>/bin/setDomainEnv.sh file, find the SUN_JAVA_HOME= line, and replace it with:
   export JAVA_VENDOR=Sun
   export SUN_JAVA_HOME=/usr/lib/jvm/java-1.6.0-openjdk-1.6.0.0.x86_64
- File soadomain.jar: On the same separate Linux server where the golden image is made, create an OFM domain and use Oracle's packing tool to make a flat-file image called soadomain.jar. In OFM installations, the packing/unpacking domain tool is in common/bin and is called pack.sh or unpack.sh. (Note: the files packSOAdomain.sh and unpackSOAdomain.sh are also included in installSOA.tar in the Cloud Map zip.) While there is a default soadomain.jar in installSOA.tar, you should make your own soadomain.jar and replace the one in installSOA.tar by running: tar rvf installSOA.tar soadomain.jar. Then place the updated installSOA.tar in c:\tmp on the CMS. (A command sketch of these preparation steps follows below.)
- Servers will not come up automatically in WebLogic Production mode. In development mode, passwords can be stored in a file to bring up the servers automatically. The admin console can be used to do a mass startup of all the SOA managed servers.
- When deploying multiple instances of the template at the same time, there are a few items to be aware of:
   - The RAC cluster name must be unique. The following line in the file c:\tmp\grid_rac.rsp on the CMS must be modified for each concurrent RAC instance: oracle.install.crs.config.clusterName=raccluster
   - For this version of the Cloud Map, multiple requests to deploy the same HP IO template cannot occur simultaneously, because the deployment uses multiple intermediary files with fixed names in c:\tmp on the CMS server.
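The commands below sketch the domain-preparation steps just described: packing the domain on the separate Linux build server, refreshing soadomain.jar inside installSOA.tar, and keeping each concurrent deployment's cluster name unique. The Middleware home, domain directory, and example cluster name are assumptions for illustration, and the exact location of pack.sh under common/bin depends on your installation.

# on the Linux server where the OFM golden image and domain were built (paths are assumptions)
MW_HOME=/u01/app/oracle/Middleware
DOMAIN=/u01/app/oracle/domains/soadomain

# pack the domain into soadomain.jar with Oracle's pack tool (found under common/bin)
"$MW_HOME"/oracle_common/common/bin/pack.sh -domain="$DOMAIN" \
    -template=/var/tmp/soadomain.jar -template_name="soadomain"

# replace the default soadomain.jar inside installSOA.tar, then copy installSOA.tar to c:\tmp on the CMS
cp /var/tmp/soadomain.jar .
tar rvf installSOA.tar soadomain.jar

# on the CMS, give each concurrent RAC deployment a unique cluster name in c:\tmp\grid_rac.rsp,
# for example (hypothetical name):
#   oracle.install.crs.config.clusterName=raccluster2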

Networks
In this setup, three networks are used, as shown in Figure 1. The names of the networks are the same for the physical networks seen by the Virtual Connect modules and by the VMware ESXi server. Only virtual networks can be used for the VMware templates. Here are descriptions of the three networks:
1. RDP_Deployment_Network. The network used to install Linux; this is the primary network between the vCenter and ESXi servers for installing Linux virtual machines. It can also be used as a backup to the RAC_OFM_Network, since RAC may use another network for high-availability functionality.
2. RAC_OFM_Network. The private RAC and OFM network over which the four systems communicate; it is also the RAC interconnect network. While the public external network could be used instead, this private network is preferred because of the high traffic and because its DNS server must be updatable by the deployed RAC servers.
3. Public-External. The public external network connecting all four systems to other systems in the company and providing external web browsing.

Figure 1. HP IO Network configuration

VMware setup
Obtaining and installing the VMware vCenter and ESXi software is necessary to set up the virtual machines. To better understand the VMware software and how to install it, see the For more information section at the end of this paper for links to VMware. Pre-installation requirements:
1. Obtain at least two available ProLiant blades in the same enclosure as the CMS server. One blade is for vCenter and the other blade(s) are for the ESXi server(s).
2. Obtain Microsoft Windows Server 2008, VMware vCenter (which also contains the vSphere client software), VMware ESXi server, and all appropriate licenses.
3. Obtain all necessary licenses for Matrix OE and apply them. One license that is easy to forget is HP iLO Advanced for BladeSystem, which is required for the iLO of the ESXi server (see Figure 2).
4. For SAN storage:
   a. Present at least 500GB to the blade that will contain the ESXi server. This gives enough space for the ESXi server software and the storage required for the Cloud Map.
   b. On the CMS server, go to Tools → Virtualization Manager Tools (2nd instance) → Virtual Connect Enterprise Manager (VCEM) → VCEM homepage and define a profile for the ESXi server as shown in Figure 3. As circled in red, make sure the profile has access to the three named networks. As circled in green, have the storage (in this case a 3PAR storage array) presented to the ESXi server. As circled in blue, make sure that it can boot off of the 3PAR LUN (LUN 0 in this example). Figure 4 shows the vSphere client when the disk drive is an HP Enterprise Virtual Array (EVA); notice the area circled in red. Figure 5 shows an HP 3PAR storage array used instead, as circled in red.
5. This Cloud Map requires that the RAC_OFM_Network have a DNS server that can be updated by the Oracle RAC servers using the Linux nsupdate command. As part of the RAC installation, nsupdate is called several times; a minimal example of such an update follows.
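As an illustration of the kind of dynamic DNS update the deployment performs, the snippet below adds an A record with nsupdate. The DNS server address, zone, host name, and IP address are placeholder assumptions, and a production zone may additionally require TSIG key authentication.

# minimal nsupdate sketch -- all names and addresses are placeholders
nsupdate <<'EOF'
server 192.168.20.2
update add racnode1-vip.racofm.local 3600 A 192.168.20.51
send
EOF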

Figure 2. License Manager. Obtaining needed licenses before Cloud Map deployment.

Figure 3. Setting up the Virtual Connect Enterprise Manager profile for the ESXi server.

Figure 4. vSphere client after an HP Enterprise Virtual Array (EVA) is set up as the datastore.

Figure 5. vSphere client after an HP 3PAR Storage Array is set up as the datastore.

The following actions are needed to install the VMware software:
1. Install Windows Server 2008 on an HP ProLiant blade.
2. Install VMware vCenter on that Windows 2008 blade. Installing vCenter on the CMS server is not supported and will not work because vCenter and the CMS use the same port numbers.
3. Bring up the iLO of the server blade where ESXi is to be installed.
   a. Install VMware ESXi on the HP ProLiant blade. ESXi doesn't require an operating system, so it installs directly onto an empty ProLiant server. As part of the installation, the LUN(s) presented during the pre-installation tasks will be offered as possible datastores.
   b. Log on to the ESXi server; there is initially no password for root.
   c. Choose Configure Password to set the root password for the ESXi server.
   d. Choose Test Management Network to make sure network connections are working.
   e. Choose Troubleshooting Mode Options and then Enable Remote Tech Support (SSH).
4. Install the vSphere client on the CMS server.
5. On the vSphere client:
   a. Log on using the name/IP address of the vCenter server and its Administrator password.
   b. Add the ESXi server as a host to the VMware data center.
   c. As shown in Figure 6, click on the ESXi server (10.0.0.103 in this example), click the Configuration tab, and choose Networking (all circled in red). Then, as circled in green, click Add Networking and Properties to add the networks with the names circled in blue. Add the network names in the order shown.

Figure 6. In vSphere Client, have all needed networks for ESXi server (10.0.0.3)

d. Create a virtual machine named OFM_VM_1 with at least 1GB memory, 20GB disk space, and access to the 3 networks circled in blue in Figure 6. The virtual machine must not be created using Thin provisioning for the disk. Have all of the 20GB disk space allocated to start with.


   e. Install Red Hat Linux onto the server. This was done by opening the console for OFM_VM_1, connecting the Red Hat Linux ISO on the CMS server to this console, and booting OFM_VM_1.
   f. For the Red Hat Linux installation, you need to install whatever is required for Oracle RAC and Oracle Fusion Middleware, as documented by Oracle. Steps 5f-h describe what was installed to test the Cloud Map, which may differ from what you install. For this test, the Red Hat Linux 5.5 installation was done manually. Software Development and Web Server were chosen as top-level customization options. Then, in the Development section, Java Development and Legacy Software Development were chosen; in the Servers section, FTP Server was chosen; and in the Base System section, System Tools was chosen. It may be possible to use the optional kickstarten_us.cfg file provided in the Cloud Map zip.
   g. After the virtual machine installation completes, open a console and log on to the virtual machine. For this test with Red Hat 5.5, the following RPMs were then installed in order:
      rpm -i bind-9.3.6-4.P1.el5_4.2.x86_64.rpm
      rpm -i bind-chroot-9.3.6-4.P1.el5_4.2.x86_64.rpm
      rpm -i java-1.6.0-openjdk-1.6.0.0-1.7.b09.el5.x86_64.rpm
      rpm -i java-1.6.0-openjdk-devel-1.6.0.0-1.7.b09.el5.x86_64.rpm
      rpm -i libaio-devel-0.3.106-5.x86_64.rpm
      rpm -i libnl-1.0-0.10.pre5.5.x86_64.rpm
      rpm -i libsysfs-2.0.0-6.x86_64.rpm
      rpm -i libxslt-python-1.1.17-2.el5_2.2.x86_64.rpm
      rpm -i lm_sensors-2.10.7-9.el5.x86_64.rpm
      rpm -i net-snmp-5.3.2.2-9.el5.x86_64.rpm
      rpm -i rusers-0.17-47.x86_64.rpm
      rpm -i rwho-0.17-26.x86_64.rpm
      rpm -i sysstat-7.0.2-3.el5.x86_64.rpm
      rpm -i perl-Convert-ASN1-0.20-1.1.noarch.rpm
      rpm -i samba-3.0.33-3.28.el5.x86_64.rpm
      rpm -i system-config-httpd-1.3.3.3-1.el5.noarch.rpm
      rpm -i system-config-nfs-1.3.23-1.el5.noarch.rpm
      rpm -i system-config-samba-1.2.41-5.el5.noarch.rpm

   h. VMware tools need to be installed on the virtual machine so it can be managed by VMware. At the time this paper was written, VMware tools for Red Hat 5.5 could be downloaded from http://packages.vmware.com/tools/esx/4.1u1/rhel5/x86_64/index.html. The following RPMs were installed in this order:
      rpm -i vmware-open-vm-tools-kmod-8.3.7-381511.el5.x86_64.rpm
      rpm -i vmware-open-vm-tools-common-8.3.7-381511.el5.x86_64.rpm
      rpm -i vmware-open-vm-tools-nox-8.3.7-381511.el5.x86_64.rpm
      rpm -i vmware-tools-common-8.3.7-381511.el5.x86_64.rpm
      rpm -i vmware-tools-nox-8.3.7-381511.el5.x86_64.rpm
   i. Next, create a file called /etc/resolv.conf.master containing the /etc/resolv.conf settings that you would like the Oracle RAC and Oracle Fusion Middleware servers to use. This file is copied to /etc/resolv.conf as part of the Cloud Map deployment process. The file should have at least the following first two lines (a consolidated command sketch for steps i and j appears after step 5m):
      domain <domain of RAC_OFM_Network>
      nameserver <IP address of DNS server of RAC_OFM_Network>


   j. Copy the Oracle Fusion Middleware golden image created through Oracle cloning to /var/tmp/OFM_SOA_Golden_Linux_image.jar.
   k. Any other software or files that are desired for the Oracle Fusion Middleware system can be installed at this time (for example, the JRockit JVM).
   l. Shut down OFM_VM_1.

   m. For the virtual machine, use Clone to Template to clone OFM_VM_1 to OFM_Template. VMware will clone OFM_Template to create future OFM virtual machines.
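The following sketch consolidates the file-preparation steps i and j on OFM_VM_1. The domain name, DNS server address, and the source location of the golden image are placeholder assumptions; substitute the values for your RAC_OFM_Network and build server.

# run on OFM_VM_1 before it is shut down; names, addresses, and paths are placeholders
cat > /etc/resolv.conf.master <<'EOF'
domain racofm.local
nameserver 192.168.20.2
EOF

# stage the golden image built with Oracle's cloning tool where the Cloud Map expects it
scp builduser@buildhost:/stage/OFM_SOA_Golden_Linux_image.jar /var/tmp/OFM_SOA_Golden_Linux_image.jar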
6. While still on the vSphere client, create the RAC master virtual machine.

   a. Create a virtual machine named RAC_VM_1 with at least 2GB memory, 50GB disk space, and access to the three networks circled in blue in Figure 6. The virtual machine must not be created using Thin provisioning for the disk. Have all of the disk space allocated to start with.
   b. Perform steps 5d-h as before. (Note: you could clone OFM_VM_1 to RAC_VM_1 and use Edit Settings to increase the disk space to 50GB, but that would create a 20GB file system (original) and a 30GB file system (added). To avoid limiting the size of the Oracle home directory, or imposing unnecessary size limits on any directory, a new virtual machine is created and installed for RAC instead.)
   c. Instead of placing the OFM golden image in /var/tmp, place the Oracle RAC 11.2.0.2 software for Linux and the ASM library files for Red Hat 5.5 there. At the time this paper was written, ASM library binaries could be downloaded from http://www.oracle.com/technetwork/server-storage/linux/downloads/rhel5084877.html, and ASM library documentation was available at http://www.oracle.com/technetwork/server-storage/linux/install-082632.html. At the time this paper was written, it was not clear whether Oracle would support ASM libraries for Red Hat Linux 6.0 and beyond. The following Oracle RAC files were copied to /var/tmp:
      p10098816_112020_Linux-x86-64_1of7.zip
      p10098816_112020_Linux-x86-64_2of7.zip
      p10098816_112020_Linux-x86-64_3of7.zip
   The following Oracle ASM libraries were copied to /var/tmp and are installed with rpm -i in the order stated below (see the command sketch after this step list):
      oracleasm-support-2.1.3-1.el5.x86_64.rpm
      oracleasm-2.6.18-194.el5-2.0.5-1.el5.x86_64.rpm
      oracleasmlib-2.0.4-1.el5.x86_64.rpm
   d. Any other software or files that are desired for the Oracle RAC system can be installed at this time.
   e. Shut down RAC_VM_1.
   f. For the virtual machine, use Clone to Template to clone RAC_VM_1 to RAC_Template. VMware will clone RAC_Template to create future RAC virtual machines.
As shown in Figure 7, the virtual machines and templates created in Steps 5 and 6 are circled in red.
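A minimal command sketch of step 6c follows, assuming /var/tmp as the staging directory as described above. The deployment-time disk initialization shown in the trailing comments is only an illustration of what the doRACprep.sh workflow script automates later, and the device name is a placeholder.

# on RAC_VM_1: install the ASM library packages from /var/tmp in the stated order
cd /var/tmp
rpm -i oracleasm-support-2.1.3-1.el5.x86_64.rpm
rpm -i oracleasm-2.6.18-194.el5-2.0.5-1.el5.x86_64.rpm
rpm -i oracleasmlib-2.0.4-1.el5.x86_64.rpm

# at deployment time doRACprep.sh initializes the shared ASM disk; done by hand, the
# equivalent would look roughly like this (placeholder device name):
#   /usr/sbin/oracleasm configure -i
#   /usr/sbin/oracleasm createdisk DATA1 /dev/sdb1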


Figure 7. vSphere Client shows that OFM_VM_1 and RAC_VM_1 virtual machines have been created. OFM_Template and RAC_Template will be used by HP IO to create more virtual machines.

Figure 8. HP IO Discovery screen for the VMware vCenter server, ESXi server, and iLO for the ESXi server.


7. Discover the vCenter and ESXi systems using the CMS server.

   a. Log on to HP Insight Orchestration (HP IO) and go to Options → Discovery. You will need to input three IP addresses, as shown in the example in Figure 8: the vCenter server, the ESXi server, and the iLO for the ESXi server. It is important that the system credentials are set up properly to log on to all three of these IP addresses.
   b. Connect the vCenter server to the CMS server. In HP IO, go to Options → VMware vCenter Settings and type in the IP address, Administrator login, and password.
   c. The ESXi server must then be set up as a managed system via Configure → Configure or Repair Agents and then the Configure Managed System Setup Wizard. Sometimes this step must be repeated several times before the ESXi server successfully becomes a managed system. No other server needs to be managed, only the ESXi server.

   d. Add the ESXi server into a server group for ESXi servers by going into Insight Orchestration and clicking on the Servers tab. As shown in Figure 9, once the ESXi server is discovered, it is placed in the Unassigned server pool (circled in red). Then create a server group for ESXi servers. In the example in Figure 9, the VC_ESXhosts server group is created and the ESXi server vega9103 (circled in blue) is placed in the group.
   e. To better understand how to use the HP IO software, see the For more information section at the end of this paper for links to HP Insight Dynamics and Matrix Operating Environment documentation.

Figure 9. Adding ESXi server to its own server group in HP IO Servers tab.


Software
Verify that the OFM_Template and RAC_Template created in the VMware setup section can be accessed in the Insight Orchestration Software screen shown in Figure 10. VMware templates cannot be added on this screen, but must be created in VMware and are automatically populated here.

Figure 10. Insight Orchestration Software page with VMware templates.


HP IO template
The HP IO template is a representation of all of the major components managed by HP IO that need to be pulled together to build the configuration of two RAC servers and two OFM servers. Figure 11 shows the HP IO template, Virtual_DBRAC_OFM, for four servers, with the template's associated properties highlighted in red.

Figure 11. HP IO template screen for Oracle RAC and Oracle Fusion Middleware


Figure 12 is a closer look at the HP IO template. It shows two Linux virtual machines from the group known as DatabaseRAC; each of these servers needs a virtual 53688 MB boot disk from a group known as RACBootDisk and a virtual 100 GB non-boot disk from a group known as ASM (used for Oracle Automatic Storage Management database disks). In addition, there are two Linux virtual machines from the group known as OFMAppServer; each of these servers needs a virtual 21476 MB boot disk from a group known as OFMBootDisk. (Note: In making the Cloud Map, slightly more than 50GB was requested for the RAC boot disk, but HP IO set the size to 53688 MB. Something similar happened with the OFM boot disk when 20GB was requested.)
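The odd-looking sizes appear to be a unit-conversion artifact rather than deliberate choices: interpreting the requested 50 GB as binary gigabytes and reporting the result in decimal megabytes gives 50 × 1024³ bytes ≈ 53,687 MB, which HP IO rounds up to 53,688 MB, and the 20 GB request similarly lands near 21,475 MB.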

Figure 12. HP IO template for 2 RAC and 2 Oracle Fusion Middleware virtual machines.


HP Operations Orchestration workflows: RAC


CloudRAClinux is the main RAC workflow included with the Cloud Map. CloudRAClinux, shown in Figure 13, is called when a Virtual_DBRAC_OFM service is created.

Figure 13. CloudRAClinux workflow as seen from Operations Orchestration Studio on the Central Management Server (CMS)

The CloudRAClinux workflow performs the following steps (match the items in the workflow above with the items in parentheses in the list below):
1. (Find Primary Hostname IPAddress) Parse the XML passed into the workflow. The workflow uses XSL files to extract server names and IP addresses; this is implemented by the FindPrimaryIPAddressByLogicalServerId.xsl file in the Cloud Map. This .xsl file is in vRACOFMxsl.zip.
2. (List Iterator) Iterate through each of the systems to be installed.
3. (If system in Database RAC group) If this is not a RAC node (that is, it is an OFM node), skip this node by going back to the List Iterator.
4. (Wait For System Availability) Wait for the RAC system to be ready to accept commands (that is, installed and booted up). A simple polling sketch of this idea appears after this list.
5. (SFTP Put doRACprep) Transfer the doRACprep.sh shell script to the newly installed Linux virtual machine.


6. (SFTP Put configureSSH) Transfer the configureSSH.sh shell script to the newly installed virtual machine.
7. (Pre-install and FTP keys to CMS) This subflow is broken out in Figure 14.
8. (Register Servers) After the virtual machines are up, the servers must be registered. This is done automatically by the workflow and does not need to be done manually.
9. (Distribute Keys Passwordless) This subflow is broken out in Figure 15.
10. (Finalize Grid installation) This subflow is broken out in Figure 16.
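The availability check is implemented inside Operations Orchestration; purely as an illustration of the idea (the host name is a placeholder, and this is not the Cloud Map's actual implementation), a shell loop that waits for a newly deployed node to start accepting connections on the SSH port might look like this:

#!/bin/bash
# illustrative only -- wait until the new node's SSH port answers
HOST=racnode1.racofm.local
until (echo > /dev/tcp/"$HOST"/22) 2>/dev/null; do
    echo "waiting for $HOST to accept SSH connections..."
    sleep 30
done
echo "$HOST is reachable"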

Figure 14. Pre-install and FTP keys subflow under the main workflow CloudRAClinux.

As shown in Figure 14, the Pre-install and FTP keys subflow performs the following steps (match the items in the workflow above with the items in parentheses in the list below):
1. (SSH run doRACprep) Run the shell script doRACprep.sh, which creates logins, initializes the database ASM RAC disk, updates DNS, and initializes the RSA keys for ssh and sftp.
2. (SFTP get Keys for grid, oracle, and root - three items in the subflow) Bring the initialized RSA keys back to the CMS server for the grid, oracle, and root users.
The whole process of getting, updating, and putting RSA keys in this workflow and the workflows that follow is somewhat complicated, because the keys for each system have to be present on every system in order for each system to be able to log on to another system without a password, which RAC requires. In addition, to log on to a system for the first time to place the keys, logon passwords cannot be placed into Linux shell script files, for security reasons. This method of putting keys on each system, overwriting them, appending the logon keys of the other systems to each key file, and then copying the key files to all of the systems may be overkill in some cases, but the process is automated. (The sketch below shows the basic pattern in shell form.)
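For orientation only, the fragment below sketches the passwordless-SSH pattern that these subflows automate. The node names, user, and staging paths are placeholders, and in the Cloud Map the first contact is authenticated with credentials supplied by Operations Orchestration rather than by anything embedded in the scripts (doRACprep.sh and configureSSH.sh do the real work).

# placeholder sketch of the passwordless-SSH pattern, run from a staging host such as the CMS
NODES="racnode1 racnode2"
KEYUSER=oracle                    # the same pattern is repeated for the grid and root users

# 1. collect each node's public key
for n in $NODES; do
    scp "$KEYUSER@$n:.ssh/id_rsa.pub" "/tmp/id_rsa.$n.pub"
done

# 2. build one combined authorized_keys file containing every node's key
cat /tmp/id_rsa.*.pub > /tmp/authorized_keys.all

# 3. push the combined file back to every node and lock down its permissions
for n in $NODES; do
    scp /tmp/authorized_keys.all "$KEYUSER@$n:.ssh/authorized_keys"
    ssh "$KEYUSER@$n" 'chmod 600 ~/.ssh/authorized_keys'
done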


Figure 15. Distribute Keys Passwordless subflow under the main workflow CloudRAClinux

As shown in Figure 15, the Distribute Keys Passwordless subflow performs the following steps (match the items in the workflow above with the items in parentheses in the list below):
1. (Find Primary Hostname IPAddress) Parse the XML passed into the workflow. The workflow uses XSL files to extract server names and IP addresses; this is implemented by the FindPrimaryIPAddressByLogicalServerId.xsl file in the Cloud Map.
2. (List Iterator) Iterate through each of the systems to be installed.
3. (If system in Database RAC group) If this is not a RAC node (that is, it is an OFM node), skip this node by going back to the List Iterator.
4. (SFTP Put keys for root, grid, and oracle) Put the RSA keys onto the RAC servers.
5. (SSH update grid, oracle, and root keys) Append the RSA keys from the current system to the key files of each system.
6. (SFTP get new oracle, grid, and root keys) Bring the RSA keys back to the CMS server to distribute to the next server.
7. (Get RAC server list) By this step, the RAC server key files on the CMS server contain all of the RSA keys of every system. Now get a list of all of the RAC servers and copy the key files to each system one last time.
8. (List Iterator #2) Iterate through each of the RAC systems to be installed.


9. (If system in Database RAC group) If this is not a RAC node (that is, it is an OFM node), skip this node by going back to the List Iterator.
10. (SFTP Put new keys for root, grid, and oracle) Put the RSA keys onto the RAC servers.
11. (SSH run configureSSH) Run the configureSSH.sh file, which activates the RSA keys for the root, grid, and oracle users so they can log on without passwords to the other RAC systems.

Figure 16. Finalize Grid installation subflow under the main RAC workflow CloudRAClinux.

As shown in Figure 16, the Finalize Grid installation subflow performs the following steps (match the items in the workflow above with the items in parentheses in the list below):
1. (Get RAC server list and Get NICs list) Get the list of only the RAC servers and the MAC addresses of all of their NICs (networking cards).
2. (List Iterator RAC Node list and List Iterator: NICs) Take only the first RAC system on the list to be installed, noting that system's NIC MAC address list. This list iterator is not looped through: since RAC is installed on the disk shared between all RAC servers, RAC only needs to be installed on the first server.
3. (SFTP Put *.rsp files, shell scripts) Put the *.rsp response files for silent installation on the server. Normally Oracle software is installed interactively with runInstaller, but since Insight Dynamics is an automated installer, response files are needed to answer the questions a user would normally answer (a response-file invocation is sketched after this list). In addition, the files configureSCAN.sh, configureVIP.sh, and doRACLinuxinstall.sh are copied to each RAC server.


4. (Get VIP Address List) Get the VIP (Virtual IP) addresses for each RAC system. These addresses are allocated by Insight Dynamics.
5. (Get SCAN Address) SCAN stands for Single Client Access Name and is used by RAC. There is only one SCAN IP address for the RAC setup, and it is allocated by Insight Dynamics.
6. (Configure VIP) Run configureVIP.sh to put the VIP address in DNS.
7. (Install grid) Run the doRACLinuxinstall.sh file to install Oracle RAC.
8. (Send Success Status) Update the success state in the progress files.

HP Operations Orchestration workflow: SetupOFM


SetupOFM is the main workflow for the OFM portion of the Cloud Map. SetupOFM, shown in Figure 17, is called when a Virtual_DBRAC_OFM service is created.

Figure 17. SetupOFM workflow as seen from Operations Orchestration Studio on the Central Management Server (CMS)

The SetupOFM workflow performs the following steps (match the items in the workflow above with the items in parentheses in the list below):
1. (Get server group, IP addr, name of all systems) Parse the XML passed into the workflow. The workflow uses XSL files to extract server names and IP addresses; this is implemented by the FindPrimaryIPAddressByLogicalServerId.xsl file in the Cloud Map.
2. (Iterate through all systems) Iterate through each of the systems to be installed.

3. (If system in OFMAppServerGroup) Operate only on OFM servers; if the system is a RAC server, go get another server from the list.
4. (Wait For Systems To Be Available) Wait for the Linux system to be ready to accept commands (that is, installed and booted up).
5. (SFTP Put master shell script) Transfer the master installation shell script doSOAinstall.sh.
6. (SFTP Put SOA tar ball = smaller install files) Transfer the archive file installSOA.tar, which contains the other files and scripts used for the OFM installation.
7. (Run master shell script) Execute the shell script doSOAinstall.sh on the new system to silently install WebLogic Server, install the OFM SOA Suite, and unpack the SOA domain (an unpack invocation is sketched after this list).
8. (Register Servers) After the Linux servers are up, the servers must be registered. This is done automatically by the workflow and does not need to be done manually.
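As an orientation to the domain-unpacking step that doSOAinstall.sh performs, the unpack tool under common/bin is typically invoked as shown below. The Middleware home, the exact common/bin location, and the target domain directory are placeholder assumptions.

# placeholder sketch of unpacking the packed SOA domain on a new OFM server
MW_HOME=/u01/app/oracle/Middleware          # assumed Middleware home; common/bin location varies by install
"$MW_HOME"/oracle_common/common/bin/unpack.sh \
    -domain=/u01/app/oracle/domains/soadomain \
    -template=/var/tmp/soadomain.jar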

Summary
With HP CloudSystem Matrix, you can simplify and streamline the build-out of a flexible and dynamic Converged Infrastructure. HP CloudSystem builds on the Converged Infrastructure with a shared services model, using pools of compute, storage, and network resources to form an ideal foundation for a cloud infrastructure. It is built on the modular HP BladeSystem architecture and includes the highly automated Matrix Operating Environment (Matrix OE), which enables rapidly provisioning complex infrastructure services and adjusting them to meet changing business demands. It also includes Cloud Service Automation for Matrix for basic application provisioning and compliance monitoring.

HP Matrix Operating Environment infrastructure orchestration provides a mechanism for provisioning application services quickly and in a repeatable fashion using pre-defined templates. These templates describe the resource requirements of the application service. As part of a template, HP Operations Orchestration workflows can be attached at various points in the execution flow. Workflows can be utilized to perform additional post-operating-system installation and configuration tasks, further automating the rollout and provisioning of the various services in your organization.

This white paper describes the HP Cloud Map for Oracle RAC and Fusion Middleware on VMware, which deploys this infrastructure software on VMware virtual machines. The Cloud Map includes workflows and scripts that automate the installation and configuration of Oracle RAC and OFM SOA Suite instances. Upon completion of the deployment, the Oracle RAC database administrator should create the database and import the data, and the OFM administrator may customize the SOA domain to fit the end users' needs. One of the benefits of virtualization is the ability to deploy multiple instances of the HP IO template without necessarily obtaining more physical systems.

For more information on how to import the infrastructure orchestration template and workflow for Oracle RAC and Fusion Middleware SOA into your HP CloudSystem Matrix environment, please download HP Cloud Map for Oracle RAC and Fusion Middleware on VMware: Importing the template and the additional files from the HP Cloud Maps download site at http://www.hp.com/go/cloudmaps/oracle.


For more information


HP CloudSystem Matrix: http://www.hp.com/go/bladesystemmatrix
HP Cloud Map download site: http://www.hp.com/go/cloudmaps
HP Cloud Map download site for Oracle software: http://www.hp.com/go/cloudmaps/oracle
HP Matrix Operating Environment (delivered through HP Insight Dynamics): http://www.hp.com/go/matrixoe and http://www.hp.com/go/matrixoperatingenvironment
HP Insight Dynamics (Matrix OE) infrastructure orchestration documentation: http://h18004.www1.hp.com/products/solutions/insightdynamics/info-library.html
HP BladeSystem: http://www.hp.com/go/bladesystem
VMware: http://www.vmware.com/support and http://www.hp.com/go/vmware
Oracle 11g Database: http://www.oracle.com/pls/db112/homepage
Oracle Fusion Middleware: http://www.oracle.com/us/products/middleware/index.html
Oracle Cloning: http://download.oracle.com/docs/cd/E15523_01/core.1111/e10105/clone.htm
HP BladeSystem technical resources: http://h71028.www7.hp.com/enterprise/cache/316682-0-0-0-121.html
HP SAN Design Reference Guide (best practices for SAN design): http://h20000.www2.hp.com/bc/docs/support/SupportManual/c00403562/c00403562.pdf
HP 3PAR Storage Systems: http://h10010.www1.hp.com/wwpc/us/en/sm/WF05a/12169-304616-5044010-50440105044010-5044216.html

To help us improve our documents, please provide feedback at http://h20219.www2.hp.com/ActiveAnswers/us/en/solutions/technical_tools_feedback.html.

Copyright 2011 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein. Microsoft and Windows are U.S. registered trademarks of Microsoft Corporation. Oracle and Java are registered trademarks of Oracle and/or its affiliates. 4AA3-6367ENW, Created August 2011
