Version 8.4
Contents
Introduction
TWS Introduction
TWS Architecture
TWS Network
TWS Workstation Types
TWS Requirements
TWS Installation
TWS Configuration
TWS User Interfaces
TWS Workstation Processes
TWS Workstation Interprocess Communication
TWS Network Communication
TWS Scheduling Objects Definition
TWS Final Job Stream
TWS Job Scheduling through CLI
TWS Job Scheduling through JSC
TWS Troubleshooting
Introduction
SCHEDULER:
Scheduling is the arrangement of a number of related operations in time. Workload
management tools exist to automate these tasks: they automate the scheduling and
allocation of hundreds or thousands of interactive and batch jobs among the various
computers on the network. This scheduling and allocation may be based on criteria
such as time deadlines, the completion of other jobs, or the needs of particular
applications. Workload management tools must also monitor job completion status
and allow system administrators to establish job or application priorities in order
to optimize network performance. IBM Tivoli Workload Scheduler is one such
workload management tool for automating jobs.
Before the start of each new day, the master domain manager creates a plan
for the next 24 hours. This plan is placed in a production control file, named
Symphony. Tivoli Workload Scheduler is then restarted in the network, and the
master domain manager sends a copy of the Symphony file to each of its
automatically linked agents and subordinate domain managers. The domain
managers, in turn, send copies of the Symphony file to their automatically linked
agents and subordinate domain managers.
Once the network is started, scheduling messages such as job starts and
completions are passed from the agents to their domain managers, and through the
parent domain managers up to the master domain manager. The master domain
manager then broadcasts the messages throughout the hierarchical tree to update
the Symphony files of the domain managers and of the fault tolerant agents running
in full status mode.
TWS Workstation Types
Master domain manager
The domain manager of the topmost domain of the Tivoli Workload Scheduler
network. It maintains the database of all scheduling objects and creates the
production plan.
Backup master domain manager
A fault tolerant agent capable of assuming the responsibilities of the master
domain manager for automatic workload recovery. The copy of the plan on the
backup master is updated with the same reporting and logging as the master
domain manager plan.
Domain manager
All communications to and from the agents in a domain are routed through
the domain manager. The domain manager can resolve dependencies between jobs
in its subordinate agents. The copy of the plan on the domain manager is updated
with reporting and logging from the subordinate agents.
Backup domain manager
A fault tolerant agent capable of assuming the responsibilities of its domain
manager. The copy of the plan on the backup domain manager is updated with the
same reporting and logging information as the domain manager plan.
Fault tolerant agent (FTA)
A workstation capable of resolving local dependencies and launching its
jobs in the absence of a domain manager. It has a local copy of the plan generated
on the master domain manager. It is also called a fault-tolerant workstation.
Standard agent
A workstation that launches jobs only under the direction of its domain
manager.
Extended agent
A logical workstation definition that enables you to launch and control jobs
on other systems and applications, such as PeopleSoft, Oracle Applications,
SAP, and MVS (JES2 and JES3).
Network agent
A logical workstation definition for creating dependencies between jobs
and job streams in separate Tivoli Workload Scheduler networks.
Job Scheduling Console client
Any workstation running the graphical user interface from which
schedulers and operators can manage Tivoli Workload Scheduler plan and
database objects. This is not actually a workstation in the Tivoli Workload
Scheduler network; the Job Scheduling Console client is simply where you work
with the Tivoli Workload Scheduler database and plan.
TWS Requirements
Hardware Requirements:
The following lists the hardware requirements for Tivoli Workload
Scheduler:
Engine - the engine may be:
- Master Domain Manager
- Backup Master Domain Manager
- Fault Tolerant Agent
Connector for distributed engine
Command line client
Disk Space Requirements (values in MB):
Operating System    FTA    CONN
IBM AIX             210    330
HP-UX               275    280
Linux               180    350
Solaris             210    390
Temporary Storage:
Temporary file space is needed during the installation of Tivoli Workload
Scheduler (values in MB).
Operating System     MDM/BKM    FTA
UNIX                 170        40
Microsoft Windows    70         20
Memory Requirements:
Recommended and minimum memory requirements (in MB) are given in the following
table.
Memory        MDM/BKM    FTA
Recommended   2048       256
Required      1024       256
Job Scheduling Console (JSC) disk space and temporary storage (values in MB):
Operating System     Disk space    Temporary space
IBM AIX              100           65
HP-UX                250           210
Linux                110           75
Solaris              120           90
Microsoft Windows    100           60
Where:
MDM  - Master Domain Manager
BKM  - Backup Master Domain Manager
FTA  - Fault Tolerant Agent
CONN - Connector
JSC  - Job Scheduling Console
TWS Installation
Tivoli Workload Scheduler - V 8.4
DB2 Database - V 9.1
Job Scheduling Console - V 8.4
Before starting the installation of TWS 8.4 on Linux, create a soft link named
libstdc++-libc6.1-2.so.3 that points to the existing source library file
libstdc++-3-libc6.2-2-2.10.0.so:
ln -s libstdc++-3-libc6.2-2-2.10.0.so libstdc++-libc6.1-2.so.3
The DB2 database can be installed while installing the TWS MDM, or it can be
installed separately.
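As a minimal sketch, assuming the compatibility library is installed under
/usr/lib (the path may differ on your distribution):
    cd /usr/lib                      # directory containing the compat library (assumed path)
    ln -s libstdc++-3-libc6.2-2-2.10.0.so libstdc++-libc6.1-2.so.3
    ls -l libstdc++-libc6.1-2.so.3   # verify that the new link resolves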
Tivoli Workload Scheduler Installation
Step-by-step Installation:
Untar the TWS source file & run setup.sh
Choose Master Domain Manager when installing an MDM; if it is an FTA, choose
Agent or Domain Manager.
On UNIX systems, the user name must already exist, so create the TWSUser and
proceed with the installation.
If installing an MDM, leave the Master domain manager name field unchanged; if
installing an FTA, type the MDM name.
The Database selection window is displayed. Select DB2 Universal Database.
The DB2 installation directory window is displayed; enter the preferred path.
The Summary window is then displayed, containing all the information provided
in the previous steps.
The DB2 install script is needed to complete the installation. Select the path where
the script is located.
The Installation completed window is displayed. Click Finish to end the
InstallShield Wizard.
The installation log file is written to the /tmp directory, as shown in the
screenshot.
Fix Pack
After installation and configuration, jobs are scheduled in TWS. At that point,
some errors may occur while scheduling (for example, error AWSJPL506E). These
errors are fixed by installing fix pack 8.4.0-TIV-TWS-FP0001 on TWS V8.4, so
download the fix pack and install it after installing TWS 8.4.
TWS Configuration
CONFIGURING MASTER DOMAIN MANAGER WORKSTATION:
After the Tivoli Workload Scheduler master domain manager has been
installed, it must be configured so that it can produce a new production plan on
a daily basis.
The production plan is handled and extended automatically by jobs in a job
stream named FINAL. Once the FINAL job stream has been added to the
database and JnextPlan has been run once, the FINAL job stream is placed in the
production plan every day and runs the jobs required to generate the new plan.
The Sfinal file is created automatically when TWS is installed. It resides in the
TWS home directory on the server where the TWS master domain manager has
been installed.
The following steps describe how to add the FINAL job stream to the database
and run the JnextPlan command manually for the first time (a condensed
transcript follows the list).
1. Log in as the TWS user.
2. Set the system variables: run tws_env.sh, which sets PATH and TWS_home.
3. Run the composer command.
4. Add the FINAL job stream definition to the database by running the
following command:
add Sfinal
where Sfinal is the name of the final job stream file.
5. Exit the composer command line and run the JnextPlan job:
JnextPlan -for 0000
This extends the production plan by 0 hours and 0 minutes.
6. When JnextPlan completes, check the status of Tivoli Workload
Scheduler with conman status. If Tivoli Workload Scheduler started
correctly, the status is Batchman=LIVES.
7. Raise the limit to allow jobs to run. The default job limit after
installation is 0, so no job runs. Raise the job limit, for example:
conman "lc <workstation>;10"
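For reference, a condensed sketch of this first-time setup from a UNIX shell;
the TWS home path /opt/TWS and the workstation name MASTER are assumptions, so
substitute your own values:
    su - twsuser                   # step 1: log in as the TWS user
    cd /opt/TWS && . ./tws_env.sh  # step 2: set PATH and TWS_home (path assumed)
    composer "add Sfinal"          # step 4: add the FINAL job stream definition
    JnextPlan -for 0000            # step 5: generate the production plan
    conman status                  # step 6: expect Batchman=LIVES
    conman "lc MASTER;10"          # step 7: raise the job limit so jobs can run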
CONFIGURING AN AGENT
1. Log in to the master domain manager as the TWSuser.
2. Set the system variables: run tws_env.sh, add the TWS home directory and
its bin directory to PATH, and export the TWS home directory as
TWS_home.
3. Create the FTA workstation definition in the TWS database.
Type composer new to open a text editor, then type the workstation
definition in the text editor:
CPUNAME TEST.SERVER
  DESCRIPTION "fault tolerant agent"
  OS UNIX
  NODE TCS.TEST TCPADDR 31111
  DOMAIN MASTERDM
  FOR MAESTRO
    TYPE FTA
    AUTOLINK ON
    BEHINDFIREWALL OFF
    FULLSTATUS OFF
END
If the workstation runs Windows, type OS WNT.
User Definition:
For Windows workstations only, users must be defined in the database before a
job can be scheduled:
USERNAME CPUNAME#USERNAME
  PASSWORD "**********"
END
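As a sketch, such a definition can be loaded from a text file with composer; the
workstation name WINFTA, the user name, the password, and the file name below
are placeholders:
    cat > userdef.txt <<'EOF'
    USERNAME WINFTA#Administrator
      PASSWORD "secret"
    END
    EOF
    composer "add userdef.txt"     # load the user definition into the database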
These actions can also be carried out through the Job Scheduling Console (JSC).
TWS Workstation Processes
Netman
Monman
Writer
Mailman
Batchman
Jobman
Netman
Netman is the Network Management process. It is started by the Startup
command and it behaves like a network listener program which receives start,
stop, link, or unlink requests from the network. Netman examines each request
received and spawns a local Tivoli Workload Scheduler process.
Monman
Monman is a process started by netman and used in event management. It starts
the monitoring and ssmagent services, which have the task of detecting the events
defined in the event rules deployed and activated on the specific workstation.
When these services catch any such events, after a preliminary filtering action,
they send them to the event processing server, which usually runs on the master
domain manager. If no event rule configurations are downloaded to the
workstation, the monitoring services stay idle.
The communication process between the monitoring agents and the event
processing server is independent of the Tivoli Workload Scheduler network
topology. It is based directly on the EIF port number of the event processor and the
event information flows directly from the monitoring agents without passing
through intermediate domain managers. A degree of fault-tolerance is guaranteed
by local cache memories that temporarily store the event occurrences on the agents
in case communication with the event processor is down.
Writer
Writer is a process started by netman to pass incoming messages to the
local mailman process. The writer processes (there might be more than one on a
domain manager workstation) are started by link requests and are stopped by
unlink requests or when the communicating mailman ends.
24
Mailman
Mailman is the Mail Management process. It routes messages to either
local or remote workstations. On a domain manager, additional mailman
processes can be created to divide the load on mailman due to the initialization of
agents and to improve the timeliness of messages. When the domain manager
starts up, it creates a separate mailman process instance for each ServerID
specified in the workstation definitions of the fault-tolerant agents and standard
agents it manages. Each workstation is contacted by its own ServerID on the
domain manager.
Batchman
Batchman is the Production Control process. It interacts directly with the copy
of the Symphony file distributed to the workstations at the beginning of the
production period and updates it. Batchman performs several functions: it keeps
track of job and job stream states, resolves dependencies, and instructs jobman
to launch jobs once their dependencies are satisfied.
Jobman
Jobman is the Job Management process. It launches jobs under the
direction of batchman and reports job status back to mailman. It is responsible
for tracking job state and for setting the environment as defined in scripts
jobmanrc and .jobmanrc when requesting to launch jobs. When the jobman
process receives a launch job message from batchman, it spawns a job monitor
process. The maximum number of job monitor processes that can be spawned on a
workstation is set using the limit cpu command from the conman command line
prompt.
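A quick way to confirm that these processes are running on a workstation
(twsuser is a placeholder for your TWSUser account):
    ps -u twsuser -f | grep -E 'netman|mailman|batchman|jobman|monman|writer'
    conman status                  # should report Batchman=LIVES when all is well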
TWS Workstation Interprocess Communication
Mailbox.msg
This message file is read by the mailman process. It receives messages
from the local batchman and jobman processes, from both the Job Scheduling
Console and the console manager conman, and from other Tivoli Workload
Scheduler workstations in the network.
Intercom.msg
This message file is read by the batchman process and contains
instructions sent by the local mailman process.
Courier.msg
This message file is written by the batchman process and read by the
jobman process.
These message files are present in the TWS home directory; be sure that their
size does not grow beyond 48 K.
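The sizes can be checked from the TWS home directory; as a minimal sketch, the
evtsize utility reports size information for a message file:
    ls -l *.msg                    # raw sizes of the local message files
    evtsize -show Mailbox.msg      # display size information for the message file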
TWS Network Communication
6. When the current Symphony file is in place on the FTA, the mailman process
on the domain manager sends a start command to the FTA.
7. The netman process on the FTA starts the local mailman process. At this point
a one-way communication link is established from the domain manager to the
FTA.
8. The mailman process on the FTA reads the host name, TCP/IP address, and
port number of the domain manager from the Symphony file and uses them to
establish the uplink back to the netman process on the domain manager.
9. The netman process on the domain manager determines that the request is
coming from the mailman process on the FTA, and spawns a new writer process
to handle the incoming connection. The mailman process on the FTA is now
connected to the writer on the domain manager and a full two-way
communication link has been established. As a result of this, the writer process on
the domain manager writes messages received from the FTA to the Mailbox.msg
file on the domain manager, and the writer process on the FTA writes messages
from the domain manager to the Mailbox.msg file on the FTA.
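The same link can also be dropped and re-established manually from conman on
the domain manager; TEST is a placeholder workstation name:
    conman sc                      # show workstations and their link status
    conman "unlink TEST"           # drop the link to the FTA
    conman "link TEST"             # re-establish it (netman spawns a new writer)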
During Production Period
During the production period, the Symphony file present on the FTA is
read and updated with the state change information about jobs that are run locally
by the Tivoli Workload Scheduler workstation processes.
These are the steps that are performed locally on the FTA to read and update the
Symphony file and to process jobs:
1. The batchman process reads a record in the Symphony file that says that job1 is
to be launched on the workstation.
2. The batchman process writes in the Courier.msg file that job1 has to start.
3. The jobman process reads this information in the Courier.msg file, starts job1,
and writes in the Mailbox.msg file that job1 started with its process_id and
timestamp.
4. The mailman process reads this information in its Mailbox.msg file, and sends
a message that job1 started with its process_id and timestamp, to both the
Mailbox.msg file on the domain manager and the local Intercom.msg file on the
FTA.
5. The batchman process on the FTA reads the message in the Intercom.msg file
and updates the local copy of the Symphony file.
27
6. When job job1 completes processing, the jobman process updates the
Mailbox.msg file with the information that says that job1 completed.
7. The mailman process reads the information in the Mailbox.msg file, and sends
a message that job1 completed to both the Mailbox.msg file on the domain
manager and the local Intercom.msg file on the FTA.
8. The batchman process on the FTA reads the message in the Intercom.msg file,
updates the local copy of the Symphony file, and determines the next job that has
to be run.
TWS Scheduling Objects Definition
JOB DEFINITION:
A job is an executable file, program, or command that is scheduled and
launched by Tivoli Workload Scheduler. You can write job definitions in edit files
and then add them to the Tivoli Workload Scheduler database with the composer
command line program, or you can add a job through the Job Scheduling Console.
JOB STREAM:
A job stream consists of a sequence of jobs to be run, together with times,
priorities, and other dependencies that determine the order of processing.
WORKSTATION:
A workstation is a scheduling object that runs jobs. It is usually an
individual computer on which jobs and job streams are run. A workstation
definition is required for every computer that runs jobs in the IBM Tivoli
Workload Scheduler network.
DOMAIN:
A domain is a group of workstations consisting of one or more agents and a
domain manager. The domain manager acts as the management hub for the agents
in the domain. You can include multiple domain definitions in the same text file,
along with workstation definitions and workstation class definitions.
CALENDAR:
A calendar is a list of dates that define if and when a job stream runs.
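As an illustrative sketch, a job and a job stream could be written in a composer
edit file as follows; the workstation TEST, the script path, and all names are
placeholders, and the legacy ON WORKDAYS run-cycle syntax is assumed:
    $JOBS
    TEST#JOB1
     SCRIPTNAME "/home/twsuser/scripts/job1.sh"
     STREAMLOGON twsuser
     DESCRIPTION "sample job"
     RECOVERY STOP

    SCHEDULE TEST#STREAM1
     ON WORKDAYS
     AT 0600
     :
     JOB1
    END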
TWS LOG FILES
TWS log files are present inside the stdlist directory of the TWS home directory.
TWS Final Job Stream
Startappserver
Makeplan
Switchplan
Createpostreports
Updatestatistics
Each job in the FINAL job stream is responsible for a particular task.
STARTAPPSERVER
This job attempts to start WebSphere Application Server if it is not already
running. It executes the script <TWS_HOME>/wastools/startWas, which invokes
the WebSphere Application Server start method.
MAKEPLAN
The following Tivoli Workload Scheduler utilities are launched from this
script:
planman - based on the information in the database, this creates a
pre-production plan (also called an intermediate plan). The pre-production plan
is stored in a file called Symnew and contains information about scheduling
objects (jobs, job streams, calendars, prompts, resources, workstations,
domains) and their dependencies.
It also prints pre-production reports.
SWITCHPLAN
This script internally invokes the stageman command.
SwitchPlan performs the following actions:
- Merges the previous Symphony file and the new Symnew file.
- Adds carry-forward job streams to the pre-production plan, thus creating the
production plan.
- Archives the old plan file, with the current date and time, in the schedlog
directory.
UPDATESTATS
The following script is executed by this job:
UpdateStats
This job performs the following tasks:
- Logs job statistics (logman updates the job master with run history).
- Checks the policies and, if necessary, extends the pre-production plan.
- Updates the pre-production plan with the job stream instance states.
CREATEPOSTREPORTS
The following script is executed by this job:
CreatePostReports
This script prints post-production reports.
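After FINAL has run, the resulting plan can be checked from the command line
(a minimal sketch):
    planman showinfo               # display production plan start, end, and run number
    conman status                  # confirm Batchman=LIVES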
Thus a job definition and a job stream are added to the database.
JOB STREAM
TWS Job Scheduling through CLI
The sbs command schedules the job stream to run at its specified time.
The syntax is: sbs workstationname#jobstreamname
sj displays the status of the scheduled job. Here it is shown as ABEND, which
means the job abended. The other possible states are:
SUCC  - The job ran successfully
FAIL  - The job failed
HOLD  - The job is on hold
READY - The job is ready to execute
The syntax is: sj workstationname#jobstreamname
sbd schedules the job to execute immediately after submission. The syntax is:
sbd workstationname#'script to be executed';logon=user;alias=anyname
Alternatively, we can use
sbj workstationname#jobdefinitionname
to achieve the same through a predefined job definition.
To view the link between the MDM and the FTA: here the integer value 33 is the
Symphony run number; it is incremented by one each time a new production plan
is generated.
sc shows all the FTAs connected to the MDM. The syntax is: sc @!@
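Putting these together, a short conman session might look like this; TEST,
STREAM1, JOB1, and the script path are placeholder names:
    conman "sbs TEST#STREAM1"      # submit a job stream into the plan
    conman "sj TEST#STREAM1.@"     # show the status of its jobs
    conman "sbd TEST#'/home/twsuser/scripts/job1.sh';logon=twsuser;alias=ADHOC1"
    conman "sbj TEST#JOB1"         # submit a predefined job
    conman "sc @!@"                # show all workstations and their links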
TWS Job Scheduling through JSC
After clicking OK, a window opens; add the job definition to be executed here,
then save the file.
After defining the job stream, we have to submit it to the plan.
Run Cycle:
A run cycle specifies when a job stream in the database is to run in the plan.
Combinations of run cycles are used to include and exclude dates.
- Select this icon to open the run cycle view.
- Select this icon to add a run cycle.
In the add run cycle window, we can specify the type:
Inclusive & Exclusive Run Cycle - if Inclusive is selected, the job stream runs
on those days; if Exclusive is selected, it does not run on those days.
Creating Calendar
We can create calendars to specify the date and time for the job to run.
Symbols
- Error: the job abended.
- Hold: the job is to be executed at a later time.
- Success: the job executed successfully.
- Ready: the job is waiting and will execute within some time.
- Blocked: the job is blocked by unresolved dependencies.
To view the link between FTA & MDM
The recovery options are specified while defining the job definition. The
possible recovery options are:
STOP - if the first job fails, the dependent second job is placed in HOLD.
CONTINUE - if the first job fails, the dependent second job still executes.
RERUN - if the first job fails, it is tried once again.
We can specify these options while creating a job definition, as in the sketch
below.
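As a sketch, the recovery option is simply one line in the job definition; all
names below are placeholders:
    $JOBS
    TEST#JOB2
     SCRIPTNAME "/home/twsuser/scripts/job2.sh"
     STREAMLOGON twsuser
     RECOVERY RERUN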
After adding the second job as a successor to the first, the graphical view will
be:
TWS Troubleshooting
Resetting the production plan
Use ResetPlan to recover from a corrupt Symphony file on the master.
Follow these steps on the master domain manager:
1. Set the job "limit" to 0, using conman or the Job Scheduling Console. This
prevents jobs from launching.
2. Shut down all Tivoli Workload Scheduler processes on the master domain
manager.
3. Run ResetPlan.
4. Run JnextPlan.
5. Check the created plan and ensure that you want to run all the instances it
contains, deleting those that you do not want to run.
6. Reset the job "limit" to the previous value. The Symphony file is distributed
and production recommences. A condensed command sequence is sketched below.
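A command-level sketch of this procedure; MASTER is a placeholder workstation
name and 10 is an example limit value:
    conman "lc MASTER;0"           # step 1: set the job limit to 0
    conman "stop;wait"             # step 2: stop the workstation processes
    conman "shut;wait"             #         stop netman as well
    ResetPlan                      # step 3: discard the corrupt plan
    JnextPlan                      # step 4: generate and distribute a new plan
    conman "lc MASTER;10"          # step 6: restore the previous job limit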
Security file errors
Cause:
If the Security file has been modified with vi or is empty, Tivoli Workload
Scheduler (TWS) commands such as conman, makesec, and dumpsec fail with a
Security file error.
Solution:
When TWS commands fail with this error, the Security file must be recreated or
restored.
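A hedged sketch of one way to recover: keep a dump of the Security file while it
is healthy, then rebuild from it; the dump path is a placeholder:
    dumpsec > /tmp/security.txt    # taken periodically while the Security file is healthy
    makesec /tmp/security.txt      # after corruption, rebuild the Security file from the dump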
Contact us
G.K.Ramasubramanian ramasubramanian.krishnamoorthy@tcs.com
V.Prakash prakash2.v@tcs.com
All content / information present here is the exclusive property of Tata Consultancy Services Limited (TCS). The content / information
contained here is correct at the time of publishing.
No material from here may be copied, modified, reproduced, republished, uploaded, transmitted, posted or distributed in any form without
prior written permission from TCS. Unauthorized use of the content / information appearing here may violate copyright, trademark and other
applicable laws, and could result in criminal or civil penalties.
Copyright 2008 Tata Consultancy Services Limited