
TIVOLI WORKLOAD SCHEDULER

Version 8.4


Contents
Introduction
TWS Introduction
TWS Architecture
TWS Network
TWS Workstation Types
TWS Requirements
TWS Installation
TWS Configuration
TWS User Interfaces
TWS Workstation Processes
TWS Workstation Interprocess Communication
TWS Network Communication
TWS Scheduling Objects Definition
TWS Final Job Stream
TWS Job Scheduling through CLI
TWS Job Scheduling through JSC
TWS Troubleshooting


Introduction
SCHEDULER:
A scheduler arranges a number of related operations in time. Workload
management tools exist to automate this task: they automate the scheduling and
allocation of hundreds or thousands of interactive and batch jobs among the
various computers on the network. This scheduling and allocation may be based
on criteria such as time deadlines, the completion of other jobs, or the needs of
particular applications. Workload management tools must also monitor job
completion status and allow systems administrators to establish job or application
priorities in order to optimize network performance. IBM Tivoli Workload
Scheduler is one such workload management tool for automating jobs.

Tivoli Workload Scheduler Introduction


IBM Tivoli Workload Scheduler is a family of IBM products that plan,
execute, and track jobs on several platforms and environments. IBM Tivoli
Workload Scheduler does not belong to the original Tivoli product line; it was
rebranded and modified from a product called Maestro from Unison. When IBM
acquired Tivoli in 1996, the program was renamed IBM Tivoli Workload
Scheduler.
Tivoli Workload Scheduler is a tool for modeling, planning, executing,
and controlling the various phases of batch workload processes occurring on z/OS,
UNIX, Windows, and Linux platforms. It is monitored from a single point of
control with a Java-based console called the JSC (Job Scheduling Console).

Tivoli Workload Scheduler architecture


Tivoli Workload Scheduler's scheduling features help us to plan every
phase of production. During the processing day, the Tivoli Workload Scheduler
production control programs manage the production environment and automate
most operator activities. Tivoli Workload Scheduler prepares jobs for execution,
resolves interdependencies, and launches and tracks each job. Because jobs start
running as soon as their dependencies are satisfied, idle time is minimized and
throughput improves significantly. Jobs never run out of sequence, and if a
job fails, Tivoli Workload Scheduler handles the recovery process with little or no
operator intervention.


Tivoli Workload Scheduler is composed of three major parts:


Tivoli Workload Scheduler engine
This is installed on every computer that participates in a Tivoli
Workload Scheduler network. The engine is a complete Tivoli Workload
Scheduler installation, which means all Tivoli Workload Scheduler services and
components are installed on the computer.
During installation, the engine is configured for the role that the
computer is going to play within the Tivoli Workload Scheduler
scheduling network, such as master domain manager, domain manager, or fault
tolerant agent. The engine role is configured in two places: in the
parameter files (localopts and globalopts), and in the database definition for the
Tivoli Workload Scheduler workstation that represents the engine on the physical
computer. The engine is also called the scheduling engine.
Tivoli Workload Scheduler connector
Maps Job Scheduling Console commands to the Tivoli Workload
Scheduler engine. The Tivoli Workload Scheduler connector runs on the master
and on any of the fault tolerant agents (FTAs) that you will use as backup machines
for the master workstation. The connector used here is WebSphere Application
Server.

Job Scheduling Console (JSC)


A Java-based graphical user interface (GUI) for the Tivoli Workload
Scheduler suite. The Job Scheduling Console runs on any machine from
which we want to manage Tivoli Workload Scheduler plan and database
objects. The Job Scheduling Console does not need to be installed on the
same machine as the Tivoli Workload Scheduler engine or connector. We
can use the Job Scheduling Console from any machine as long as it has a
TCP/IP link with the machine running the Tivoli Workload Scheduler
connector.


Tivoli Workload Scheduler network


A Tivoli Workload Scheduler network is made up of the workstations on
which jobs and job streams are run.
A Tivoli Workload Scheduler network contains at least one Tivoli
Workload Scheduler domain, the master domain, in which the master domain
manager is the management hub. It is the master domain manager that manages the
databases and it is from the master domain manager that we define new objects in
the databases. Additional domains can be used to divide a widely distributed
network into smaller, locally managed groups.
In a single domain configuration, the master domain manager maintains
communication with all of the workstations (fault tolerant agents) in the Tivoli
Workload Scheduler network.

Before the start of each new day, the master domain manager creates a plan
for the next 24 hours. This plan is placed in a production control file, named
Symphony. Tivoli Workload Scheduler is then restarted in the network, and the
master domain manager sends a copy of the Symphony file to each of its
automatically linked agents and subordinate domain managers. The domain
managers, in turn, send copies of the Symphony file to their automatically linked
agents and subordinate domain managers.
Once the network is started, scheduling messages such as job start and
completion notifications are passed from the agents to their domain managers, and
through the parent domain managers to the master domain manager. The master
domain manager then broadcasts the messages throughout the hierarchical tree to
update the Symphony files of domain managers and fault tolerant agents running
in full status mode.


Using multiple domains reduces the amount of network traffic by reducing
the communications between the master domain manager and the other computers
in the network. The following figure shows a Tivoli Workload Scheduler network
with three domains. It is not necessary that the MDM, domain managers, and
FTAs be installed on the same platform.

Tivoli Workload Scheduler workstation types


For most cases, workstation definitions refer to physical workstations.
However, in the case of extended and network agents, the workstations are logical
definitions that must be hosted by a physical Tivoli Workload Scheduler
workstation.
Tivoli Workload Scheduler workstations can be of the following types:
Master domain manager (MDM)
The master domain manager is the domain manager in the topmost domain of a
Tivoli Workload Scheduler network. It contains the centralized database files used
to document scheduling objects. It creates the plan at the start of each day, and
performs all logging and reporting for the network. The plan is distributed to
all subordinate domain managers and fault tolerant agents.
Backup master
A fault tolerant agent or domain manager capable of assuming the
responsibilities of the master domain manager for automatic workload
recovery. The copy of the plan on the backup master is updated with the
same reporting and logging as the master domain manager plan.
Domain manager
All communications to and from the agents in a domain are routed through
the domain manager. The domain manager can resolve dependencies between jobs
in its subordinate agents. The copy of the plan on the domain manager is updated
with reporting and logging from the subordinate agents.
Backup domain manager
A fault tolerant agent capable of assuming the responsibilities of its domain
manager. The copy of the plan on the backup domain manager is updated with the
same reporting and logging information as the domain manager plan.
Fault tolerant agent (FTA)
A workstation capable of resolving local dependencies and launching its
jobs in the absence of a domain manager. It has a local copy of the plan generated
on the master domain manager. Fault tolerant agents are also called fault tolerant
workstations.
Standard agent
A workstation that launches jobs only under the direction of its domain
manager.
Extended agent
A logical workstation definition that enables you to launch and control jobs
on other systems and applications, such as PeopleSoft, Oracle Applications,
SAP, and MVS (JES2 and JES3).
Network agent
A logical workstation definition for creating dependencies between jobs
and job streams in separate Tivoli Workload Scheduler networks.
Job Scheduling Console client
Any workstation running the graphical user interface from which
schedulers and operators can manage Tivoli Workload Scheduler plan and
database objects. Actually this is not a workstation in the Tivoli Workload
Scheduler network; the Job Scheduling Console client is where you work with the
Tivoli Workload Scheduler database and plan.


TWS Requirements
Hardware Requirements:
The following lists the hardware requirements for Tivoli Workload
Scheduler:
Engine
The engine may be
Master Domain Manager
Backup Master Domain Manager
Fault Tolerant Agent
Connector for distributed engine
Command line client
Disk Space Requirements (values in MB):

Operating System     MDM with DB2 server    FTA    CONN
IBM AIX              1375                   210    330
HP-UX                1595                   275    280
Linux                1245                   180    350
Solaris              1525                   210    390

Temporary Storage:
Temporary file space is needed during the installation of Tivoli Workload
Scheduler (values in MB):

Operating System     MDM/BKM    FTA
UNIX                 170        40
Microsoft Windows    70         20
Memory Requirements:
Recommended and minimum memory requirements (in MB) are given in the
following table:

Memory         MDM/BKM    FTA
Recommended    2048       256
Required       1024       256


Job Scheduling Console


Disk Space Requirements (values in MB):

Operating System     JSC
IBM AIX              100
HP-UX                250
Linux                110
Solaris              120
Microsoft Windows    100

Temporary disk space requirements (values in MB):

Operating System     JSC
IBM AIX              65
HP-UX                210
Linux                75
Solaris              90
Microsoft Windows    60

Where:
MDM - Master Domain Manager
BKM - Backup Master Domain Manager
FTA - Fault Tolerant Agent
CONN - Connector
JSC - Job Scheduling Console


TWS Installation
Tivoli Workload Scheduler - V 8.4
DB2 Database - V 9.1
Job Scheduling Console - V 8.4

Before starting installation of TWS 8.4 on Linux, create a soft link named
libstdc++-libc6.1-2.so.3 that points to the source file
libstdc++-3-libc6.2-2-2.10.0.so:
ln -s libstdc++-3-libc6.2-2-2.10.0.so libstdc++-libc6.1-2.so.3
The DB2 database can be installed while installing the TWS MDM. It can also be
installed separately.
Tivoli Workload Scheduler Installation
Step-by-step Installation:
Untar the TWS source file & run setup.sh
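For example, a minimal sketch of this step, assuming a hypothetical distribution
file name (adjust it to your installation media):

tar -xvf TWS84_LNX_I386.tar    # extract the TWS 8.4 distribution (file name is an assumption)
./setup.sh                     # start the InstallShield wizard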


Choose "Install an instance of Tivoli Workload Scheduler".

Choose "Master Domain Manager" while installing the MDM; if it is an FTA,
choose "Agent or Domain Manager".


On UNIX systems, the user name must exist. So create the TWSUser and proceed
with the installation.

If it is the MDM, leave the master domain manager name as it is; if it is an FTA,
type the MDM name.


The default installation directory for TWS is the user's home directory.

The Database selection window will be displayed. Select DB2 Universal Database.


The Database installation action window will be displayed. Select "Install
DB2 UDB Enterprise Server Edition and Administration Client, version 9.1".

If DB2 is already installed separately, select "Check that an existing
instance of DB2 UDB satisfies Tivoli Workload Scheduler" and then map the
installed DB2.
If the DB2 server administrator does not exist, it will be created automatically.


The DB2 installation directory window will be displayed. Insert the preferred path.

The DB2 database configuration window will be displayed.

The Summary window will be displayed. This window contains all the information
that was provided in the previous steps.


The DB2 install script is needed to complete the installation. Select the path where
the script is located.


The Installation completed window will be displayed. Click Finish to end the
InstallShield Wizard.

The installation log file is written to the /tmp directory.


Fix Pack
After installation and configuration, we schedule jobs in TWS. At that point, we
may face some errors while scheduling (for example, error AWSJPL506E). These
errors are fixed by installing fix pack 8.4.0-TIV-TWS-FP0001 on TWS V 8.4.
So download the fix pack and install it after installing TWS 8.4.

Job Scheduling Console Installation:


The Job Scheduling Console can be installed on any machine that has a TCP/IP
connection with the MDM.

Fault Tolerant Agent Installation:


The Fault Tolerant Agent can be installed from the same source used for the
MDM; select "Agent or Domain Manager" instead of "Master Domain Manager".


TWS Configuration
CONFIGURING MASTER DOMAIN MANAGER WORKSTATION:
After the Tivoli Workload Scheduler master domain manager has been
installed, it should be configured so that it produces a new production plan on a
daily basis.
The production plan is handled and extended automatically by jobs in a job
stream named FINAL. When the FINAL job stream has been added to the
database and JnextPlan has been run once, the FINAL job stream is placed in the
production plan every day and runs the jobs required to generate the new plan.
The Sfinal file is created automatically when TWS is installed. It is in the
TWS home directory on the server where the TWS master domain manager has
been installed.
The following steps describe how to add the FINAL job stream to the database
and run the JnextPlan command manually for the first time.
1. Log in as the TWS user.
2. Set the system variables: run tws_env.sh to set PATH and TWS_home.
3. Run the composer command.
4. Add the final job stream definition to the database by running the
following command:
add Sfinal (where Sfinal is the name of the final job stream file).
5. Exit the composer command line and run the JnextPlan job:
JnextPlan -for 0000. This extends the production plan by 0 hours
and 0 minutes.
6. When JnextPlan completes, check the status of Tivoli Workload
Scheduler with conman status. If Tivoli Workload Scheduler started
correctly, the status will be Batchman=LIVES.
7. Raise the limit to allow jobs to run. The default job limit after
installation is 0, so no jobs run:
conman "lc workstationname;limit"
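A minimal sketch of this first-time sequence, run as the TWS user (MASTER is a
hypothetical workstation name and the limit value 10 is an example):

. ./tws_env.sh           # set the TWS environment variables
composer add Sfinal      # add the FINAL job stream definition
JnextPlan -for 0000      # generate the first production plan
conman status            # expect Batchman=LIVES
conman "lc MASTER;10"    # raise the job limit so jobs can run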


CONFIGURING AN AGENT
1. Log in to the master domain manager as the TWSuser.
2. Set the system variables: run tws_env.sh, add the TWS home directory and
its bin directory to PATH, and export the TWS home directory as
TWS_home.
3. Create the FTA workstation definition in the TWS database:
Type composer new to open a text editor.
Type the workstation definition in the text editor.
CPUNAME TEST.SERVER
DESCRIPTION "fault tolerant agent"
OS UNIX
NODE TCS.TEST TCPADDR 31111
DOMAIN MASTERDM
FOR MAESTRO
TYPE FTA
AUTOLINK ON
BEHINDFIREWALL OFF
FULLSTATUS OFF
END
If it is a Windows OS, type OS WNT.

Run JnextPlan with the option -for 0000.


4. Issue the link command from the master domain manager to link the
agent, download the Symphony file to it, and set the job limit:
conman link ftaname
Note: In order to establish a two-way link between a fault tolerant agent and the
master domain manager, the following must be satisfied.
The master domain manager must be able to resolve the fault tolerant agent's
node information.
The NODE option in the workstation definition contains the server
hostname or IP address of the fault tolerant agent. When the FTA receives the
Symphony file, it looks up the MDM workstation and tries to establish an upward
link to the master domain manager, using the server hostname or IP address
specified in the NODE keyword of the master domain manager workstation.
NODE together with TCPADDR specifies the hostname and port number
that the fault tolerant agent and master domain manager will use to establish the
two-way network link.


User Definition:
Users need to be defined in the database prior to the scheduling of a job, but only
for Windows workstations:
USERNAME CPUNAME#USERNAME
PASSWORD "**********"
END

CONFIGURING DOMAIN WORKSTATION


1. Log in to the master domain manager as the TWSuser.
2. Set the system variables: run tws_env.sh to set PATH and TWS_home.
3. Create the domain and workstation definitions in the TWS database:
Type composer new to open a text editor.
Type the definitions in the text editor.
DOMAIN DOMAIN1
DESCRIPTION DOMAIN1
PARENT MASTERDM
END
CPUNAME TEST.SERVER
DESCRIPTION "fault tolerant agent"
OS UNIX
NODE TCS.TEST TCPADDR 31111
DOMAIN DOMAIN1
FOR MAESTRO
TYPE MANAGER
AUTOLINK ON
BEHINDFIREWALL OFF
FULLSTATUS OFF
END

Automatically starting Tivoli Workload Scheduler on UNIX


Make sure that the Tivoli Workload Scheduler workstation is started
automatically when the server boots. The Tivoli Workload Scheduler installation
program does not perform this action. On UNIX and Linux, Tivoli Workload
Scheduler can be started automatically by invoking the TWS StartUp command
from the /etc/inittab file or from a boot script such as the following:

if [ -x TWShome/StartUp ]
then
    echo "netman started..."
    /bin/su - TWSuser -c "TWShome/StartUp"
fi
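A sketch of the corresponding /etc/inittab entry, assuming TWShome and
TWSuser are replaced with the actual home directory and user (the id field tws
and the runlevels are assumptions):

tws:2345:once:/bin/su - TWSuser -c "TWShome/StartUp" > /dev/null 2>&1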
Note: The StartUp script is run by the TWSuser. The StartUp script starts the
netman process and, if installed on the master domain manager, it also starts the
WebSphere Application Server for TWS.
The remaining Tivoli Workload Scheduler process tree can be started with:
conman start
On Windows workstations, the netman process and WAS are started when the
server is booted.

Manually Starting & Stopping TWS & DB2

Starting TWS
StartUp
conman start
Stopping TWS
conman stop
conman shut
DB2
Log in as the DB2 user, then:
db2start - starts DB2
db2stop - stops DB2

Managing Production Plan:


How you manage production scheduling activities with Tivoli
Workload Scheduler
Each time a new production plan is generated, Tivoli Workload Scheduler
selects the job streams that run in the time window specified for the plan, and
carries forward uncompleted job streams from the previous production plan. All
the required information is written to a file named Symphony, which is
continually updated during processing to indicate work completed, work in
progress, and work to be done. The Tivoli Workload Scheduler conman (Console
Manager) command-line program is used to manage the information in the
Symphony file. The conman command-line program can be used to:

Start and stop Tivoli Workload Scheduler control processes.


Display the status of jobs and job streams.
Alter priorities and dependencies.
Alter the job fence and job limits.
Rerun jobs.
Cancel jobs and job streams.
Submit new jobs and job streams.
Reply to prompts.
Link and unlink workstations in the Tivoli Workload Scheduler network.
Modify the number of available resources.

These actions can also be carried out through the Job Scheduling Console (JSC).
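For example, a few typical conman invocations (the job stream name is the
hypothetical one used later in this document):

conman "ss @"                       # show the status of all job streams
conman "sj LINUXCOE1#COPIESJS.@"    # show the jobs of one job stream
conman "stop"                       # stop the TWS control processes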

Tivoli Workload Scheduler user interfaces


composer
A command-line program used to define and manage scheduling objects in the
database.
conman
A command-line program used to monitor and control the Tivoli Workload
Scheduler production plan processing.

Job Scheduling Console


An interactive graphical interface used to create, modify, and delete objects in the
product database and in the plan.

Tivoli Workload Scheduler workstation processes


Netman
Monman
Writer
Mailman
Batchman
Jobman

Netman
Netman is the Network Management process. It is started by the Startup
command and it behaves like a network listener program which receives start,
stop, link, or unlink requests from the network. Netman examines each request
received and spawns a local Tivoli Workload Scheduler process.
Monman
Monman is a process started by netman and used in event management.
Starts monitoring and ssmagent services that have the task of detecting the events
defined in the event rules deployed and activated on the specific workstation.
When these services catch any such events, after a preliminary filtering action,
they send them to the event processing server that runs usually in the master
domain manager. If no event rule configurations are downloaded to the
workstation, the monitoring services stay idle.
The communication process between the monitoring agents and the event
processing server is independent of the Tivoli Workload Scheduler network
topology. It is based directly on the EIF port number of the event processor and the
event information flows directly from the monitoring agents without passing
through intermediate domain managers. A degree of fault-tolerance is guaranteed
by local cache memories that temporarily store the event occurrences on the agents
in case communication with the event processor is down.
Writer
Writer is a process started by netman to pass incoming messages to the
local mailman process. The writer processes (there might be more than one on a
domain manager workstation) are started by link requests and are stopped by
unlink requests or when the communicating mailman ends.


Mailman
Mailman is the Mail Management process. It routes messages to either
local or remote workstations. On a domain manager, additional mailman
processes can be created to divide the load on mailman due to the initialization of
agents and to improve the timeliness of messages. When the domain manager
starts up, it creates a separate mailman process instance for each ServerID
specified in the workstation definitions of the fault-tolerant agents and standard
agents it manages. Each workstation is contacted by its own ServerID on the
domain manager.
Batchman
Batchman is the Production Control process. It interacts directly with the copy
of the Symphony file distributed to the workstations at the beginning of the
production period and updates it. Batchman performs several functions:

Manages plan processing and updating locally.


Resolves dependencies of jobs and job streams.
Selects jobs to be run.
Updates the plan with the job processing results.
Batchman is the only process that can update the Symphony file.

Jobman
Jobman is the Job Management process. It launches jobs under the
direction of batchman and reports job status back to mailman. It is responsible
for tracking job state and for setting the environment as defined in scripts
jobmanrc and .jobmanrc when requesting to launch jobs. When the jobman
process receives a launch job message from batchman, it spawns a job monitor
process. The maximum number of job monitor processes that can be spawned on a
workstation is set using the limit cpu command from the conman command line
prompt.
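For example, a one-line sketch (the workstation name FTA1 and the value 20 are
assumptions):

conman "lc FTA1;20"    # lc (limit cpu) allows up to 20 concurrent jobs on FTA1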

TWS Workstation inter-process communication


Tivoli Workload Scheduler uses message queues for local inter-process
communication. There are four message files, which reside in the TWS_home
directory:
NetReq.msg
This message file is read by the netman process for local commands. It
receives messages such as START, STOP, LINK, and UNLINK.


Mailbox.msg
This message file is read by the mailman process. It receives messages
from the local batchman and jobman processes, from both the Job Scheduling
Console and the console manager conman, and from other Tivoli Workload
Scheduler workstations in the network.
Intercom.msg
This message file is read by the batchman process and contains
instructions sent by the local mailman process.
Courier.msg
This message file is written by the batchman process and read by the
jobman process.
These message files are present in the TWS home directory; make sure that their
size does not grow beyond 48 K.
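The current and maximum sizes can be checked, and the files enlarged, with the
evtsize utility; a sketch (the new size value is an example, and TWS should be
stopped before resizing):

evtsize -show Mailbox.msg      # display the current and maximum size
evtsize Mailbox.msg 20000000   # raise the maximum size to about 20 MB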

Tivoli Workload Scheduler Network Communication


Connection initialization and two-way communication setup
These are the steps involved in the establishment of a two-way Tivoli Workload
Scheduler link between a domain manager and a remote FTA:
1. On the domain manager, the mailman process reads the host name, TCP/IP
address, and port number of the FTA from the Symphony file.
2. The mailman process on the domain manager establishes a TCP/IP connection
to the netman process on the FTA using the information obtained from the
Symphony file.
3. The netman process on the FTA determines that the request is coming from the
mailman process on the domain manager, and spawns a new writer process to
handle incoming connection.
4. The mailman process on the domain manager is now connected to the writer
process on the FTA. The writer process on the FTA communicates the current run
number of its copy of the Symphony file to the mailman process on the domain
manager. This run number is the identifier used by Tivoli Workload Scheduler to
recognize each Symphony file generated by JnextPlan. This step is necessary for
the domain manager to check if the current plan has already been sent to the FTA.
5. The mailman process on the domain manager compares its Symphony file run
number with the run number of the Symphony file on the FTA. If the run numbers
are different, the mailman process on the domain manager sends to the writer
process on the FTA the latest copy of the Symphony file.


6. When the current Symphony file is in place on the FTA, the mailman process
on the domain manager sends a start command to the FTA.
7. The netman process on the FTA starts the local mailman process. At this point
a one-way communication link is established from the domain manager to the
FTA.
8. The mailman process on the FTA reads the host name, TCP/IP address, and
port number of the domain manager from the Symphony file and uses them to
establish the uplink back to the netman process on the domain manager.
9. The netman process on the domain manager determines that the request is
coming from the mailman process on the FTA, and spawns a new writer process
to handle the incoming connection. The mailman process on the FTA is now
connected to the writer on the domain manager and a full two-way
communication link has been established. As a result of this, the writer process on
the domain manager writes messages received from the FTA to the Mailbox.msg
file on the domain manager, and the writer process on the FTA writes messages
from the domain manager to the Mailbox.msg file on the FTA.
During Production Period
During the production period, the Symphony file present on the FTA is
read and updated with the state change information about jobs that are run locally
by the Tivoli Workload Scheduler workstation processes.
These are the steps that are performed locally on the FTA to read and update the
Symphony file and to process jobs:
1. The batchman process reads a record in the Symphony file that says that job1 is
to be launched on the workstation.
2. The batchman process writes in the Courier.msg file that job1 has to start.
3. The jobman process reads this information in the Courier.msg file, starts job1,
and writes in the Mailbox.msg file that job1 started with its process_id and
timestamp.
4. The mailman process reads this information in its Mailbox.msg file, and sends
a message that job1 started with its process_id and timestamp, to both the
Mailbox.msg file on the domain manager and the local Intercom.msg file on the
FTA.
5. The batchman process on the FTA reads the message in the Intercom.msg file
and updates the local copy of the Symphony file.


6. When job job1 completes processing, the jobman process updates the
Mailbox.msg file with the information that says that job1 completed.
7. The mailman process reads the information in the Mailbox.msg file, and sends
a message that job1 completed to both the Mailbox.msg file on the domain
manager and the local Intercom.msg file on the FTA.
8. The batchman process on the FTA reads the message in the Intercom.msg file,
updates the local copy of the Symphony file, and determines the next job that has
to be run.


Definition of Scheduling Objects:


Scheduling objects are managed with the composer command-line program
or the JSC and are stored in the Tivoli Workload Scheduler database. The composer
command-line program or JSC can be installed and used on any machine
connected through TCP/IP to the system where the master domain manager and
database are installed.

JOB DEFINITION:
A job is an executable file, program, or command that is scheduled and
launched by Tivoli Workload Scheduler. You can write job definitions in edit files
and then add them to the Tivoli Workload Scheduler database with the composer
command-line program, or add a job through the Job Scheduling Console.
JOB STREAM:
A job stream consists of a sequence of jobs to be run, together with times,
priorities, and other dependencies that determine the order of processing.
WORKSTATION:
A workstation is a scheduling object that runs jobs. It is usually an
individual computer on which jobs and job streams are run. A workstation
definition is required for every computer that runs jobs in the IBM Tivoli
Workload Scheduler network.
DOMAIN:
A domain is a group of workstations consisting of one or more agents and a
domain manager. The domain manager acts as the management hub for the agents
in the domain. You can include multiple domain definitions in the same text file,
along with workstation definitions and workstation class definitions.

CALENDAR:
A calendar is a list of dates which define if and when a job stream runs.
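As an illustration, a minimal composer calendar definition sketch (the calendar
name, description, and dates are assumptions):

$CALENDAR
HOLIDAYS
 "Company holidays"
 01/01/2008 05/01/2008 12/25/2008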
TWS LOG FILES
TWS log files are present inside the stdlist directory of the TWS home directory.


TWS FINAL JOB STREAM


VIEW OF FINAL JOB STREAM SCHEDULE

The FINAL job stream consists of the following jobs:

Startappserver
Makeplan
Switchplan
Createpostreports
Updatestatistics


Each job in the FINAL job stream is responsible for a particular task.
STARTAPPSERVER
This job attempts to start WebSphere Application Server if it is not already
running.
This job executes the script <TWS_HOME>/wastools/startWas, which
invokes the WebSphere Application Server start method.
MAKEPLAN
The following Tivoli Workload Scheduler utility is launched from this
script:
planman - based on the information in the database, this creates a pre-production
plan (also called an intermediate plan). The pre-production plan is
stored in a file called Symnew. This pre-production plan contains information
about scheduling objects (jobs, job streams, calendars, prompts, resources,
workstations, domains) and their dependencies.
MAKEPLAN also prints pre-production reports.
SWITCHPLAN
This script internally invokes the stageman command.
SWITCHPLAN performs the following actions:

Stops the Tivoli Workload Scheduler processes.

Merges the previous Symphony file and the new Symnew file. It adds
carried-forward job streams to the pre-production plan and thus creates the
production plan.

Archives the old plan file with the current date and time in the
schedlog directory.

Creates a copy of the Symphony file to distribute to the workstations.

Restarts the Tivoli Workload Scheduler processes, which distribute the
copy of the Symphony file to the workstation targets for running
the jobs in the plan.


UPDATESTATS
This job executes the UpdateStats script, which performs the following tasks:
Logs job statistics (logman updates the job master with run history).
Checks the policies and, if necessary, extends the pre-production plan.
Updates the pre-production plan, reporting the job stream instance states.
CREATEPOSTREPORTS
This job executes the CreatePostReports script, which prints post-production
reports.
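For reference, the current plan can be inspected and extended from the command
line; a sketch (the extension length is an example):

planman showinfo       # display production plan information
JnextPlan -for 0024    # extend the production plan by 24 hours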

TWS Job SCHEDULING THROUGH CLI


Tivoli Workload Scheduler COMPOSER & CONMAN commands:
TWS COMPOSER commands:
To Create Job Definition:
Go to composer mode


Open a text editor:

vi new

Here LINUXCOE1 is the CPU name (host name) of the workstation, and:
COPIES - the job definition name
DOCOMMAND - the script or command to be executed
STREAMLOGON - the user under which the job runs
TASKTYPE - the operating system type
RECOVERY - the recovery option determining whether dependent jobs should run or stop
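A minimal sketch of such a job definition in composer syntax (the DOCOMMAND
path is an assumption):

$JOBS
LINUXCOE1#COPIES
 DOCOMMAND "/home/twsuser/scripts/copy.sh"
 STREAMLOGON twsuser
 DESCRIPTION "sample copy job"
 TASKTYPE UNIX
 RECOVERY STOP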
To create Job Stream
Go to composer mode


Open a text editor:

vi new1

Here LINUXCOE1 is the node (host name) of the workstation, and:
COPIESJS - the job stream name
RUNCYCLE - describes whether the job stream runs daily, weekly, monthly, and so on
AT - the time at which the job should execute
COPIES - the job definition name to be executed
EVERY - the repeat range
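A minimal sketch of such a job stream in composer syntax (the run cycle name,
times, and repeat range are assumptions):

SCHEDULE LINUXCOE1#COPIESJS
ON RUNCYCLE DAILY1 "FREQ=DAILY;"
AT 1000
:
LINUXCOE1#COPIES
 EVERY 0100
END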
Adding Job Definition & Job Stream to the Database
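For example, assuming the edit files were saved as new and new1 as above, they
can be added from the shell:

composer add new     # adds the job definition
composer add new1    # adds the job stream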

Thus a job definition and a job stream are added to the database.


Viewing the added Job Definition and Job Stream


JOB DEFINITION

JOB STREAM


To view the Workstation added in the Database

To modify the added job definition, job stream, or workstation, type modi instead
of display.
Likewise, to delete the job definition, job stream, or workstation from the
database, type del instead of display.
To view all job definitions, job streams, and workstations added in the database,
type @ in the place of the CPU, job definition, and job stream names; this will
display, for example, the job streams on all workstations.
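A sketch of these composer commands, using the object names defined above:

display cpu=@ (all workstations)
display jobs=LINUXCOE1#COPIES (one job definition)
display sched=@#@ (all job streams on all workstations)
modi sched=LINUXCOE1#COPIESJS (modify a job stream)
del jobs=LINUXCOE1#COPIES (delete a job definition)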


TWS CONMAN COMMANDS:

To schedule a job stream so that it executes at the defined time, go to conman mode.

The sbs command submits the job stream to run at the specified time.
The syntax is:
sbs workstationname#jobstreamname
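For example, with the job stream defined earlier:

sbs LINUXCOE1#COPIESJS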

To view the status of the Job scheduled


sj displays the status of the scheduled job. Here it is shown as ABEND, which
means the job has abended. Other possible states include:
SUCC - the job ran successfully
FAIL - the job failed
HOLD - the job is on hold
READY - the job is ready to execute
The syntax is:
sj workstationname#jobstreamname

sj workstationname#jobstreamname;stdlist displays the complete log of the
scheduled job.


To schedule a job to execute immediately (ad hoc submission):

sbd schedules a command to execute immediately after submission. The syntax is:
sbd workstationname#"script to be executed";logon=user;alias=anyname
Alternatively, we can use
sbj workstationname#jobdefinitionname
to submit an already defined job in the same way.
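A sketch at the conman prompt, with hypothetical script and alias names:

sbd LINUXCOE1#"/home/twsuser/scripts/adhoc.sh";logon=twsuser;alias=ADHOC1
sbj LINUXCOE1#COPIES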

To view the link between the MDM & FTA: the integer value shown (33 here) is
the Symphony file run number. It is incremented by one each time a new
production plan is generated.


sc shows all the FTAs connected to the MDM. The syntax is sc @!@ (all
workstations in all domains).


TWS Job Scheduling through Job Scheduling Console


A Java-based graphical user interface (GUI) for the Tivoli Workload
Scheduler suite. The Job Scheduling Console is an interface for creating,
modifying, monitoring, controlling, and deleting Tivoli Workload Scheduler
objects.
Creating a New Engine
To create a new engine, click New Engine in the JSC.
Engine Name - type the MDM engine name
Specify Engine Type - select Distributed
Host Name - type the hostname of the engine
Port Number - by default the JSC connects to port number 31117
User Name - type the MDM user name
Password - type the MDM user name password


Creating a New Workstation

When you install an FTA or any other workstation, add the workstation as
follows:

Click New Workstation.

Name - type the name of the FTA
Domain - type the domain name under which the FTA should connect
Workstation Type - select the workstation type
Operating System - select the operating system
Time Zone - select the time zone of the FTA
Node Name - type the hostname of the FTA
TCP Port - by default the FTA connects through port number 31111


Creating a Job Definition:

Select the General tab to select the OS type and to give the job definition name,
workstation, and login name.
Select the Task tab to select whether a script or command is to be run, and type
the command to be executed.


Creating a Job Stream

Select the General tab; type the job stream name and select the workstation.
Select the Time Restrictions tab and specify a start time, so that the job will be
executed at that time.


After clicking OK, a window opens; add the job definition to be executed here,
then save.


Select the add-job icon to add a job definition to the job stream.

By specifying a repeat range, the job is executed repeatedly at the specified
interval. Set this in the properties of the job.


After defining the job stream, we have to submit it to the plan.

Run Cycle:
A run cycle specifies when a job stream in the database is to run in the plan.
Combinations of run cycles are used to include and exclude dates.
Select the run cycle icon to open the run cycle list, then select the add icon to add
a run cycle.
In the add run cycle window, we can specify the following types:

Simple - the job runs on the days explicitly specified
Calendar - insert a calendar so that the job runs on the days specified in the calendar
Daily - the job runs daily at the specified time
Weekly - the job runs on the days of the week specified

Inclusive & Exclusive Run Cycle - if Inclusive is selected, the job runs on those
days; if Exclusive is selected, the job runs on all days except those days.


Creating a Calendar
We can create calendars to specify the dates on which a job should run.


To view the job stream status:

Symbols
Error - the job has abended.
Hold - the job will be executed at a later time.
Success - the job has executed successfully.
Ready - the job is waiting and will execute within some time.
Blocked - the job is blocked by unresolved dependencies.
To view the link between the FTA & MDM:


TWS Job Dependency

We can specify job dependencies so that a job starts after the
completion of another job. The following options are available for
dependencies:

If the first job succeeds, the second job executes.
If the first job fails, the second job should not execute.
If the first job fails, the second job should execute.

These options are specified while defining the job definition, through the
recovery options. The possible recovery options are:
STOP - if the first job fails, the dependent second job is placed in HOLD.
CONTINUE - if the first job fails, the dependent second job executes.
RERUN - if the first job fails, it is run once again.
We can specify these options while creating a job definition.


Therefore, we can create a job dependency between two independent jobs.

We can create up to 40 dependencies in TWS 8.4. Create this job dependency
through the job stream in the JSC.
Adding job dependencies:
Open the job stream editor, add two independent jobs, and go to explorer
mode. Add the second job as successor to the first job, so that the second job
depends on the first one.
The dependent job may run on any workstation; it is not necessary that the
dependencies be on a single workstation.
For example, here two independent jobs, JOB1 on LINUXCOE1 and SOLRM
on SOLARIS1, are running. JOB1 is the first job and SOLRM is the second job.
We can make the second job dependent on the first job: click Explorer and add
the second job as successor to the first one.


After adding the second job as successor to the first, the graphical view shows the
dependency between the two jobs.
TWS Troubleshooting
Resetting the production plan
Use ResetPlan to recover a corrupt Symphony file on the master.
Follow these steps on the master domain manager:
1. Set the job "limit" to 0, using conman or the Job Scheduling Console. This
prevents jobs from launching.
2. Shut down all Tivoli Workload Scheduler processes on the master domain
manager.
3. Run ResetPlan.
4. Run JnextPlan.
5. Check the created plan and ensure that you want to run all the instances it
contains, deleting those that you do not want to run.
6. Reset the job "limit" to the previous value. The Symphony file is distributed and
production recommences.


SYNTAX: ResetPlan [-scratch]. This should be run by the root user.

If executed with the -scratch option:
ResetPlan removes all of the plan information, allowing the user to create a
new plan as if no plan was present before.
If executed without the -scratch option:
ResetPlan updates and preserves the information about completed (or
cancelled) job stream instances.
This allows the user to create a new plan with only the job stream instances in
the specified time frame (-from, -to) that were not completed before.
Partially executed job streams still need to be manually modified to cancel
jobs that were already executed and cannot be executed twice.
TWS commands fail with AWSDEJ027E ... Security file is empty or corrupt

Solution:
If Tivoli Workload Scheduler (TWS) commands such as conman, makesec,
and dumpsec fail due to the error above, it may be necessary to restore the
Security file.
Cause:
If the Security file is modified by vi or is empty, then the above error will
occur and the Security file will need to be recreated.


Resolving the Problem:


Recreate the Security file:
1. Log in as root
2. cd ~twsuser
3. . ./tws_env.sh
4. mv Security Security.bad
5. makesec Security.conf


About Tata Consultancy Services (TCS)

Tata Consultancy Services is an IT services, business solutions and outsourcing
organization that delivers real results to global businesses, ensuring a level of
certainty no other firm can match. TCS offers a consulting-led, integrated portfolio
of IT and IT-enabled services delivered through its unique Global Network
Delivery Model™, recognized as the benchmark of excellence in software
development.
A part of the Tata Group, India's largest industrial conglomerate, TCS has over
100,000 of the world's best trained IT consultants in 50 countries. The company
generated consolidated revenues of US $5.7 billion for the fiscal year ended 31 March
2008 and is listed on the National Stock Exchange and Bombay Stock Exchange in
India. For more information, visit us at www.tcs.com.

Contact us
G.K.Ramasubramanian ramasubramanian.krishnamoorthy@tcs.com
V.Prakash prakash2.v@tcs.com

All content / information present here is the exclusive property of Tata Consultancy Services Limited (TCS). The content / information
contained here is correct at the time of publishing.
No material from here may be copied, modified, reproduced, republished, uploaded, transmitted, posted or distributed in any form without
prior written permission from TCS. Unauthorized use of the content / information appearing here may violate copyright, trademark and other
applicable laws, and could result in criminal or civil penalties.
Copyright 2008 Tata Consultancy Services Limited
