The installation kit is primarily used for setup and maintenance of the Installation
Manager.
3. Choose the installation mode (admin, user, or group) and the installation
location.
4. Verify that Installation Manager is installed correctly.
5. Access the product repositories: this procedure uses several Installation Manager sample
jobs to access the product repositories. These jobs can be found in the GIN.SGINJCL
data set that is installed as part of FMID HGIN140.
You can access the initial product repositories and optional service repositories. To display the
exact contents of the repositories, use the Installation Manager imcl
listAvailablePackages command.
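As an illustrative sketch, listing a repository's contents might look like the following; the tools directory and the repository path are assumptions, not locations taken from this document:

```shell
# Run imcl from the Installation Manager tools directory; both paths below
# are illustrative examples, not locations from this installation.
cd /InstallationManager/bin/eclipse/tools
./imcl listAvailablePackages -repositories /usr/lpp/InstallationManagerRepository/HBBO900
```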
Stage 2: Migration
There are two ways to migrate:
1. Manual: collect the configuration files manually and then upgrade.
2. Using the migration toolkit.
Migration toolkit:
For a deployment manager with federated nodes, run the command only from the deployment
manager profile_root/bin directory and never from the federated node profile. For stand-alone
nodes, run the command from the node profile_root/bin directory.
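For example, on a deployment manager with federated nodes, the commands are launched like this (the paths and the backup directory are illustrative assumptions):

```shell
# Run the migration commands from the deployment manager profile's bin
# directory, never from a federated node profile.
# All paths here are illustrative assumptions.
cd /WebSphere/V9/AppServer/profiles/Dmgr01/bin
./WASPreUpgrade.sh /tmp/migrationBackup /WebSphere/V8/AppServer
```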
Key activities:
Profiles:
The application servers are defined and configured within a profile.
There are two profile types: WAS traditional and Liberty.
The WAS traditional profile is used in this upgrade.
The main unit of migration is the profile, which is migrated in three basic steps:
1. Take a snapshot of the source profile from the old installation.
2. Create a compatible target profile in the new installation.
3. Merge the data from the snapshot into the target profile.
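In command terms, these three steps map roughly onto the migration tools as follows; every path and profile name in this sketch is an illustrative assumption:

```shell
# 1. Take a snapshot of the source profile into a migration backup directory.
/WebSphere/V9/AppServer/bin/WASPreUpgrade.sh /tmp/migrationBackup /WebSphere/V8/AppServer

# 2. Create a compatible target profile in the new installation.
/WebSphere/V9/AppServer/bin/manageprofiles.sh -create \
  -profileName Dmgr01 \
  -templatePath /WebSphere/V9/AppServer/profileTemplates/management \
  -serverType DEPLOYMENT_MANAGER

# 3. Merge the snapshot data into the target profile.
/WebSphere/V9/AppServer/bin/WASPostUpgrade.sh /tmp/migrationBackup -profileName Dmgr01
```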
Migrating a cell, which contains the deployment manager and federated nodes, requires
special attention. Because the deployment manager controls the configuration in the cell, each
node must be synchronized with the new deployment manager as it is migrated.
WASMigrationAppInstaller command:
You can run the WASMigrationAppInstaller command as many times as necessary to install
any applications that were not installed by the WASPostUpgrade command.
Naming conventions:
WebSphere Application Server uses the following naming scheme:
V.R.M.F
where
V = version
R = release
M = modification
F = fix pack
For example, 9.0.0.1 refers to version 9, release 0, modification 0, and fix pack 1. It is also
common to use "version" as a prefix for a particular release, modification, or fix pack:
"version 9.0" when referring to a release, for example, or "version 9.0.0.1" when referring to
a fix pack.
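A small shell sketch of how such a V.R.M.F level string splits into its four components:

```shell
# Split a V.R.M.F maintenance level into its four components.
level="9.0.0.1"
IFS=. read -r version release modification fixpack <<EOF
$level
EOF
echo "version=$version release=$release modification=$modification fixpack=$fixpack"
# prints: version=9 release=0 modification=0 fixpack=1
```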
Target Datasets
High-level qualifier (HLQ)
High-level qualifier for the target z/OS datasets that contains the generated jobs and
instructions.
When a z/OS migration definition is uploaded to the target z/OS system, the migration
jobs and files are written to a pair of partitioned datasets. While it is possible to reuse
these datasets, it is safest to create separate datasets for each z/OS system that is to be
migrated.
HLQ.CNTL - a partitioned dataset with fixed block 80-byte records to contain
migration jobs
HLQ.DATA - a partitioned dataset with variable length data to contain other data that
is contained in the migration definition
Note: A multilevel high-level qualifier can be specified as the dataset high-level
qualifier.
1. Back up the deployment manager and all old nodes: To allow recovery if the
migration fails, save the current deployment manager and node configurations to a
file by using the backupConfig command.
2. Install WebSphere Application Server Version 9.0 onto each target machine in a new
directory by using Installation Manager.
3. Create the target deployment manager profile by running
the manageprofiles command with the appropriate parameters.
The target deployment manager profile is a new deployment manager profile that will be the
target of the migration.
4. Save the current deployment manager configuration to the migration backup directory
by running the WASPreUpgrade command from the new deployment manager
profile bin directory.
5. Restore the previous deployment manager configuration that you saved in the
migration backup directory by running the WASPostUpgrade command.
6. Back up the Version 9.0 deployment manager configuration to a file by running
the backupConfig command on the Version 9.0 deployment manager. This is an
important step in the cell migration plan. If any node migration fails, you can
restore the cell configuration to the point before the failure, apply remedial
actions, and attempt the node migration again.
7. Start the Version 9.0 deployment manager.
Note: Ensure that the previous version of the deployment manager is not running.
8. For Compute Grid or Feature Pack for Modern Batch, verify that the job scheduler
was migrated correctly and that you can dispatch jobs to the previous version servers
that host your batch applications.
9. Migrate application client installations. Migrate client resources to Version 9.0-level
resources.
10. Migrate nodes.
Use the migration tools to migrate the previous versions of the nodes in the
configuration to Version 9.0.
11. Migrate plug-ins for web servers.
The product supports several different web servers.
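Steps 6 and 7 of this procedure might look like the following sketch; the profile path and archive name are illustrative assumptions:

```shell
# Step 6: back up the Version 9.0 deployment manager configuration to a file.
/WebSphere/V9/AppServer/profiles/Dmgr01/bin/backupConfig.sh /tmp/v9DmgrConfig.zip -nostop

# Step 7: start the Version 9.0 deployment manager.
/WebSphere/V9/AppServer/profiles/Dmgr01/bin/startManager.sh
```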
Clone migration:
Clone migrations on WebSphere Application Server for z/OS are supported on fix pack
9.0.0.3 and higher.
Cell Clone Migration Process:
1. Migrate dmgr.
2. Migrate node A.
3. Migrate node B.
4. Once cloned, the two cells are managed independently.
5. The V8.0 cell remains functional and running.
6. The V9.0 cell keeps all the same names for the cell, nodes, clusters, and servers.
7. The V9.0 cell is started, tuned, and tested.
8. The web server is switched from the V8.0 cell to the V9.0 cell when ready.
9. The V8.0 cell can be stopped, but kept for recovery.
Clone migrations follow the standard migration procedures, except that:
The -clone parameter is specified when you run the WASPostUpgrade command.
For clone migrations, the new profile configuration must use unique port numbers so that the
new and old configurations do not have port conflicts.
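A sketch of the clone-specific variations on the standard commands; the profile names, the -startingPort value, and the exact true/false form of the -clone parameter shown here are assumptions:

```shell
# Create the clone target profile with a non-conflicting port range so the
# new and old configurations can run side by side.
/WebSphere/V9/AppServer/bin/manageprofiles.sh -create \
  -profileName Dmgr01Clone \
  -templatePath /WebSphere/V9/AppServer/profileTemplates/management \
  -serverType DEPLOYMENT_MANAGER \
  -startingPort 20000

# Merge the snapshot as a clone, leaving the old cell unchanged and running.
/WebSphere/V9/AppServer/bin/WASPostUpgrade.sh /tmp/migrationBackup \
  -profileName Dmgr01Clone -clone true
```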
The deployment manager maintains the master configuration data for all of the nodes that it
manages. This configuration data is updated through the configuration manager. When the
configuration manager detects that updates to the configuration data were not made against
the latest saved copy, it rejects the updates and creates an exception.
To minimize the potential for failures when migrating federated nodes, follow these
guidelines:
Migrate each node independently. For example, let the first node complete the
migration process before starting the process for the second node, and so on.
Ensure that the administrative console for the deployment manager is not running
while the migration process is in progress.
Rollback:
Generally, migration does not modify anything in the configuration of the prior release;
however, there are cases where minimal changes are made that are reversible.
When you use clone migration, a rollback plan is not necessary; the previous environment
remains unchanged.
In a Network Deployment Cell, the old nodes are never synchronized to or managed by the
new deployment manager. To reset a clone migration, delete all the nodes in the target cell
and start over.
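Resetting a clone migration therefore amounts to deleting the target profiles and repeating the procedure; the profile name below is an illustrative assumption:

```shell
# Delete a target (clone) profile so the clone migration can be restarted.
/WebSphere/V9/AppServer/bin/manageprofiles.sh -delete -profileName Dmgr01Clone
```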
For a WAS Network Deployment cell:
You can use the restoreConfig and wsadmin commands to roll back a migrated WAS V9 cell
to the previous version. This returns the configuration to the state that it was in before
migration. After rolling back you can restart the migration process.
Procedure:
1. Back up your existing configuration.
Run the backupConfig command or your own preferred utility to back up the source
deployment manager configuration.
Run the backupConfig command or your own preferred utility to back up the source
federated node configurations.
2. Stop all of the servers and node agents that are currently running in the Version 9.0
environment.
3. If you chose to disable the previous deployment manager when you migrated to the
Version 9.0 deployment manager, run the restoreConfig command.
4. Synchronize the federated nodes if they were ever running when the Version 9.0
deployment manager was running.
5. If you chose to keep the installed applications in the same location as the prior release
during migration to Version 9.0 and any of the Version 9.0 applications are not
compatible with the prior release, install applications that are compatible.
6. Delete the Version 9.0 profiles.
7. Start the rolled-back deployment manager and its federated nodes in the Version 7.0
or later environment.
A similar procedure can also be used to roll back a federated node.
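The restore step at the center of this rollback might look like the following sketch; the archive name and profile path are illustrative assumptions:

```shell
# Restore the pre-migration deployment manager configuration from the
# backupConfig archive taken before the migration began.
/WebSphere/V8/AppServer/profiles/Dmgr01/bin/restoreConfig.sh /tmp/preMigrationDmgr.zip
```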