Front cover

End-to-End Scheduling with IBM Tivoli Workload Scheduler V8.2

Plan and implement your end-to-end scheduling environment
Experiment with real-life scenarios
Learn best practices and troubleshooting

Vasfi Gucer
Michael A. Lowry
Finn Bastrup Knudsen

ibm.com/redbooks
International Technical Support Organization
End-to-End Scheduling with IBM Tivoli Workload
Scheduler V 8.2
September 2004
SG24-6624-00
Trademarks
The following terms are trademarks of the International Business Machines Corporation in the United States,
other countries, or both:
AIX®, AS/400®, HACMP™, IBM®, Language Environment®, Maestro™, MVS™, NetView®, OS/390®, OS/400®, RACF®, Redbooks™, Redbooks (logo)™, S/390®, ServicePac®, Tivoli®, Tivoli Enterprise Console®, TME®, VTAM®, z/OS®, zSeries®
The following terms are trademarks of other companies:
Java and all Java-based trademarks and logos are trademarks or registered trademarks of Sun
Microsystems, Inc. in the United States, other countries, or both.
Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the
United States, other countries, or both.
Intel is a trademark of Intel Corporation in the United States, other countries, or both.
UNIX is a registered trademark of The Open Group in the United States and other countries.
Linux is a trademark of Linus Torvalds in the United States, other countries, or both.
Other company, product, and service names may be trademarks or service marks of others.
x End-to-End Scheduling with IBM Tivoli Workload Scheduler V 8.2
Before moving to Sweden, he worked in Austin for Apple, IBM, and the IBM Tivoli
Workload Scheduler Support Team at Tivoli Systems. He has five years of
experience with Tivoli Workload Scheduler and has extensive experience with
IBM network and storage management products. He is also an IBM Certified
AIX® Support Professional.
Finn Bastrup Knudsen is an Advisory IT Specialist in Integrated Technology
Services (ITS) in IBM Global Services in Copenhagen, Denmark. He has 12
years of experience working with IBM Tivoli Workload Scheduler for z/OS®
(OPC) and four years of experience working with IBM Tivoli Workload Scheduler.
Finn primarily does consultation and services at customer sites, as well as IBM
Tivoli Workload Scheduler for z/OS and IBM Tivoli Workload Scheduler training.
He is a certified Tivoli Instructor in IBM Tivoli Workload Scheduler for z/OS and
IBM Tivoli Workload Scheduler. He has worked at IBM for 13 years. His areas of
expertise include IBM Tivoli Workload Scheduler for z/OS and IBM Tivoli
Workload Scheduler.
Also thanks to the following people for their contributions to this project:
International Technical Support Organization, Austin Center
Budi Darmawan and Betsy Thaggard
IBM Italy
Angelo D'ambrosio, Paolo Falsi, Antonio Gallotti, Pietro Iannucci, Valeria
Perticara
IBM USA
Robert Haimowitz, Stephen Viola
IBM Germany
Stefan Franke
Notice
This publication is intended to help Tivoli specialists implement an end-to-end
scheduling environment with IBM Tivoli Workload Scheduler 8.2. The information
in this publication is not intended as the specification of any programming
interfaces that are provided by Tivoli Workload Scheduler 8.2. See the
PUBLICATIONS section of the IBM Programming Announcement for Tivoli
Workload Scheduler 8.2 for more information about what publications are
considered to be product documentation.
Become a published author
Join us for a two- to six-week residency program! Help write an IBM Redbook
dealing with specific products or solutions, while getting hands-on experience
with leading-edge technologies. You will team with IBM technical professionals,
Business Partners, and/or customers.
Your efforts will help increase product acceptance and customer satisfaction. As
a bonus, you will develop a network of contacts in IBM development labs, and
increase your productivity and marketability.
Find out more about the residency program, browse the residency index, and
apply online at:
ibm.com/redbooks/residencies.html
Comments welcome
Your comments are important to us. We want our Redbooks™ to be as helpful as
possible. Send us your comments about this or other Redbooks in one of the
following ways:
Use the online “Contact us” review redbook form found at:
ibm.com/redbooks
Send your comments in an e-mail to:
redbook@us.ibm.com
Mail your comments to:
IBM Corporation, International Technical Support Organization
Dept. JN9B Building 905 Internal Zip 2834
11501 Burnet Road
Austin, Texas 78758-3493
1.1 Job scheduling
Scheduling is the nucleus of the data center. Orderly, reliable sequencing and
management of process execution is an essential part of IT management. The IT
environment consists of multiple strategic applications, such as SAP R/3 and
Oracle, payroll, invoicing, e-commerce, and order handling. These applications
run on many different operating systems and platforms. Legacy systems must be
maintained and integrated with newer systems.
Workloads are increasing, accelerated by electronic commerce. Staffing and
training requirements increase, and many platform experts are needed. There
are too many consoles and no overall point of control. Constant (24x7) availability
is essential and must be maintained through migrations, mergers, acquisitions,
and consolidations.
Dependencies exist between jobs in different environments. For example, a
customer can use a Web browser to fill out an order form that triggers a UNIX®
job that acknowledges the order, an AS/400® job that orders parts, a z/OS job
that debits the customer’s bank account, and a Windows NT® job that prints an
invoice and address label. Each job must run only after the job before it has
completed.
The IBM Tivoli Workload Scheduler Version 8.2 suite provides an integrated
solution for running this kind of complicated workload. Its Job Scheduling
Console provides a centralized point of control and unified interface for managing
the workload regardless of the platform or operating system on which the jobs
run.
The Tivoli Workload Scheduler 8.2 suite includes IBM Tivoli Workload Scheduler,
IBM Tivoli Workload Scheduler for z/OS, and the Job Scheduling Console. Tivoli
Workload Scheduler and Tivoli Workload Scheduler for z/OS can be used
separately or together.
End-to-end scheduling means using both products together, with an IBM
mainframe acting as the scheduling controller for a network of other
workstations.
Because Tivoli Workload Scheduler and Tivoli Workload Scheduler for z/OS have
different histories and work on different platforms, someone who is familiar with
one of the programs may not be familiar with the other. For this reason, we give a
short introduction to each product separately and then proceed to discuss how
the two programs work together.
1.2 Introduction to end-to-end scheduling
End-to-end scheduling means scheduling workload across all computing
resources in your enterprise, from the mainframe in your data center, to the
servers in your regional headquarters, all the way to the workstations in your
local office. The Tivoli Workload Scheduler end-to-end scheduling solution is a
system whereby scheduling throughout the network is defined, managed,
controlled, and tracked from a single IBM mainframe or sysplex.
End-to-end scheduling requires using two different programs: Tivoli Workload
Scheduler for z/OS on the mainframe, and Tivoli Workload Scheduler on other
operating systems (UNIX, Windows®, and OS/400®). This is shown in
Figure 1-1.
(The figure shows the MASTERDM master domain, where the master domain manager OPCMASTER runs Tivoli Workload Scheduler for z/OS on z/OS. Below it, domain managers DMA and DMB in DomainA and DomainB run Tivoli Workload Scheduler on AIX and HPUX, and fault-tolerant agents FTA1 through FTA4 run on Linux, OS/400, Windows XP, and Solaris.)

Figure 1-1 Both schedulers are required for end-to-end scheduling
Despite the similar names, Tivoli Workload Scheduler for z/OS and Tivoli
Workload Scheduler are quite different and have distinct histories. IBM Tivoli
Workload Scheduler for z/OS was originally called OPC. It was developed by IBM
in the early days of the mainframe. IBM Tivoli Workload Scheduler was originally
developed by a company called Unison Software. Unison was purchased by
Tivoli, and Tivoli was then purchased by IBM.
Tivoli Workload Scheduler for z/OS and Tivoli Workload Scheduler work in
slightly different ways, but the two programs have many features in common. IBM
has continued development of both programs toward the goal of providing closer
and closer integration between them. The reason for this integration is simple: to
facilitate an integrated scheduling system across all operating systems.
It should be obvious that end-to-end scheduling depends on using the mainframe
as the central point of control for the scheduling network. There are other ways to
integrate scheduling between z/OS and other operating systems. We will discuss
these in the following sections.
1.3 Introduction to Tivoli Workload Scheduler for z/OS
IBM Tivoli Workload Scheduler for z/OS has been scheduling and controlling
batch workloads in data centers since 1977. Originally called Operations
Planning and Control (OPC), the product has been extensively developed and
extended to meet the increasing demands of customers worldwide. An overnight
workload consisting of 100,000 production jobs is not unusual, and Tivoli
Workload Scheduler for z/OS can easily manage this kind of workload.
1.3.1 Overview of Tivoli Workload Scheduler for z/OS
IBM Tivoli Workload Scheduler for z/OS databases contain all of the information
about the work that is to be run, when it should run, and the resources that are
needed and available. This information is used to calculate a forecast called the
long-term plan. Data center staff can check this to confirm that the desired work
is being scheduled when required. The long-term plan usually covers a time
range of four to twelve weeks. The current plan is produced based on the
long-term plan and the databases. The current plan usually covers 24 hours and
is a detailed production schedule. Tivoli Workload Scheduler for z/OS uses the
current plan to submit jobs to the appropriate processor at the appropriate time.
All jobs in the current plan have Tivoli Workload Scheduler for z/OS status codes
that indicate the progress of work. When a job’s predecessors are complete,
Tivoli Workload Scheduler for z/OS considers it ready for submission. It verifies
that all requested resources are available, and when these conditions are met, it
causes the job to be submitted.
1.3.2 Tivoli Workload Scheduler for z/OS architecture
IBM Tivoli Workload Scheduler for z/OS consists of a controller and one or more
trackers. The controller, which runs on a z/OS system, manages the Tivoli
Workload Scheduler for z/OS databases and the long-term and current plans. The controller
schedules work and causes jobs to be submitted to the appropriate system at the
appropriate time.
Trackers are installed on every system managed by the controller. The tracker is
the link between the controller and the managed system. The tracker submits
jobs when the controller instructs it to do so, and it passes job start and job end
information back to the controller.
The controller can schedule jobs on z/OS systems using trackers or on other
operating systems using fault-tolerant agents (FTAs). FTAs can be run on many
operating systems, including AIX, Linux®, Solaris, HP-UX, OS/400, and
Windows. FTAs run IBM Tivoli Workload Scheduler, formerly called Maestro.
The most common way of working with the controller is via ISPF panels.
However, several other methods are available, including Program Interfaces,
TSO commands, and the Job Scheduling Console.
The Job Scheduling Console (JSC) is a Java™-based graphical user interface for
controlling and monitoring workload on the mainframe and other platforms. The
first version of JSC was released at the same time as Tivoli OPC Version 2.3.
The current version of JSC (1.3) has been updated with several new functions
specific to Tivoli Workload Scheduler for z/OS. JSC provides a common interface
to both Tivoli Workload Scheduler for z/OS and Tivoli Workload Scheduler.
For more information about IBM Tivoli Workload Scheduler for z/OS architecture,
see Chapter 2, “End-to-end scheduling architecture” on page 25.
1.4 Introduction to Tivoli Workload Scheduler
IBM Tivoli Workload Scheduler is descended from the Unison Maestro program.
Unison Maestro was developed by Unison Software on the Hewlett-Packard MPE
operating system. It was then ported to UNIX and Windows. In its various
manifestations, Tivoli Workload Scheduler has a 17-year track record. During the
processing day, Tivoli Workload Scheduler manages the production environment
and automates most operator activities. It prepares jobs for execution, resolves
interdependencies, and launches and tracks each job. Because jobs begin as
soon as their dependencies are satisfied, idle time is minimized. Jobs never run
out of sequence. If a job fails, IBM Tivoli Workload Scheduler can handle the
recovery process with little or no operator intervention.
1.4.1 Overview of IBM Tivoli Workload Scheduler
As with IBM Tivoli Workload Scheduler for z/OS, there are two basic aspects to
job scheduling in IBM Tivoli Workload Scheduler: The database and the plan.
The database contains all definitions for scheduling objects, such as jobs, job
streams, resources, and workstations. It also holds statistics of job and job
stream execution, as well as information on the user ID that created an object
and when an object was last modified. The plan contains all job scheduling
activity planned for a period of one day. In IBM Tivoli Workload Scheduler, the
plan is created every 24 hours and consists of all the jobs, job streams, and
dependency objects that are scheduled to execute for that day. Job streams that
do not complete successfully can be carried forward into the next day’s plan.
1.4.2 IBM Tivoli Workload Scheduler architecture
A typical IBM Tivoli Workload Scheduler network consists of a master domain
manager, domain managers, and fault-tolerant agents. The master domain
manager, sometimes referred to as just the master, contains the centralized
database files that store all defined scheduling objects. The master creates the
plan, called Symphony, at the start of each day.
Each domain manager is responsible for distribution of the plan to the
fault-tolerant agents (FTAs) in its domain. A domain manager also handles
resolution of dependencies between FTAs in its domain.
FTAs are the workhorses of a Tivoli Workload Scheduler network. FTAs are
where most jobs are run. As their name implies, fault-tolerant agents are fault
tolerant. This means that in the event of a loss of communication with the domain
manager, FTAs are capable of resolving local dependencies and launching their
jobs without interruption. FTAs are capable of this because each FTA has its own
copy of the plan. The plan contains a complete set of scheduling instructions for
the production day. Similarly, a domain manager can resolve dependencies
between FTAs in its domain even in the event of a loss of communication with the
master, because the domain manager’s plan receives updates from all
subordinate FTAs and contains the authoritative status of all jobs in that domain.
The master domain manager is updated with the status of all jobs in the entire
IBM Tivoli Workload Scheduler network. Logging and monitoring of the IBM Tivoli
Workload Scheduler network is performed on the master.
Starting with Tivoli Workload Scheduler Version 7.0, a new Java-based graphical
user interface was made available to provide an easy-to-use interface to Tivoli
Workload Scheduler. This new GUI is called Job Scheduling Console (JSC). The
current version of JSC has been updated with several functions specific to Tivoli
Workload Scheduler. The JSC provides a common interface to both Tivoli
Workload Scheduler and Tivoli Workload Scheduler for z/OS.
For more about IBM Tivoli Workload Scheduler architecture, see Chapter 2,
“End-to-end scheduling architecture” on page 25.
1.5 Benefits of integrating Tivoli Workload Scheduler for
z/OS and Tivoli Workload Scheduler
Both Tivoli Workload Scheduler for z/OS and Tivoli Workload Scheduler have
individual strengths. While an enterprise running mainframe and non-mainframe
systems could schedule and control work using only one of these tools or using
both tools separately, a complete solution requires that Tivoli Workload
Scheduler for z/OS and Tivoli Workload Scheduler work together.
The Tivoli Workload Scheduler for z/OS long-term plan gives peace of mind by
showing the workload forecast weeks or months into the future. Tivoli Workload
Scheduler fault-tolerant agents go right on running jobs even if they lose
communication with the domain manager. Tivoli Workload Scheduler for z/OS
manages huge numbers of jobs through a sysplex of connected z/OS systems.
Tivoli Workload Scheduler extended agents can control work on applications
such as SAP R/3 and Oracle.
Many data centers need to schedule significant amounts of both mainframe and
non-mainframe jobs. It is often desirable to have a single point of control for
scheduling on all systems in the enterprise, regardless of platform, operating
system, or application. These businesses would probably benefit from
implementing the end-to-end scheduling configuration. End-to-end scheduling
enables the business to make the most of its computing resources.
That said, the end-to-end scheduling configuration is not necessarily the best
way to go for every enterprise. Some computing environments would probably
benefit from keeping their mainframe and non-mainframe schedulers separate.
Others would be better served by integrating the two schedulers in a different
way (for example, z/OS [or MVS™] extended agents). Enterprises with a majority
of jobs running on UNIX and Windows servers might not want to cede control of
these jobs to the mainframe. Because the end-to-end solution involves software
components on both mainframe and non-mainframe systems, there will have to
be a high level of cooperation between your mainframe operators and your UNIX
and Windows system administrators. Careful consideration of the requirements
of end-to-end scheduling is necessary before going down this path.
There are also several important decisions that must be made before beginning
an implementation of end-to-end scheduling. For example, there is a trade-off
between centralized control and fault tolerance. Careful planning now can save
you time and trouble later. In Chapter 3, “Planning end-to-end scheduling with
Tivoli Workload Scheduler 8.2” on page 109, we explain in detail the decisions
that must be made prior to implementation. We strongly recommend that you
read this chapter in full before beginning any implementation.
1.6 Summary of enhancements in V8.2 related to
end-to-end scheduling
Version 8.2 is the latest version of both IBM Tivoli Workload Scheduler and IBM
Tivoli Workload Scheduler for z/OS. In this section we cover the new functions
that affect end-to-end scheduling in three categories.
1.6.1 New functions related to performance and scalability
Several features are now available with IBM Tivoli Workload Scheduler for z/OS
8.2 that directly or indirectly affect performance.
Multiple first-level domain managers
In IBM Tivoli Workload Scheduler for z/OS 8.1, there was a limitation of only one
first-level domain manager (called the primary domain manager). In Version 8.2,
you can have multiple first-level domain managers (that is, the level immediately
below OPCMASTER). See Figure 1-2 on page 9.
This allows greater flexibility and scalability and eliminates a potential
performance bottleneck. It also allows greater freedom in defining your Tivoli
Workload Scheduler distributed network.
(The figure shows the master domain manager OPCMASTER on z/OS with two first-level domains, DomainZ and DomainY, whose domain managers DMZ and DMY run on AIX. Below them, domain managers DMA, DMB, and DMC in DomainA, DomainB, and DomainC run on AIX and HPUX, and fault-tolerant agents FTA1 through FTA4 run on AIX, Linux, Windows 2000, and Solaris.)

Figure 1-2 IBM Tivoli Workload Scheduler network with two first-level domains
Improved SCRIPTLIB parser
The job definitions for non-centralized scripts are kept in members in the
SCRPTLIB data set (EQQSCLIB DD statement). The definitions are specified as
keywords and parameters, as in the following example:
Example 1-1 SCRPTLIB member AIXJOB01 (TWS.INST.SCRPTLIB)

/* Job to be executed on AIX machines */
VARSUB
  TABLES(FTWTABLE)
  PREFIX('&')
  VARFAIL(YES)
  TRUNCATE(NO)
JOBREC
  JOBSCR('&TWSHOME./scripts/return_rc.sh 2')
  RCCONDSUCC('(RC=4) OR (RC=6)')
RECOVERY
  OPTION(STOP)
  MESSAGE('Reply Yes when OK to continue')
The information in the SCRPTLIB member must be parsed every time a job is
added to the Symphony file (both at Symphony creation and when a job is added dynamically).
In IBM Tivoli Workload Scheduler 8.1, the TSO parser was used, but this caused
a major performance issue: up to 70% of the time that it took to create a
Symphony file was spent parsing the SCRIPTLIB library members. In Version
8.2, a new parser has been implemented that significantly reduces the parsing
time and consequently the Symphony file creation time.
Check server status before Symphony file creation
In an end-to-end configuration, daily planning batch jobs require that both the
controller and server are active to be able to synchronize all the tasks and avoid
unprocessed events being left in the event files. If the server is not active, the
daily planning batch process now fails at the beginning, avoiding pointless extra
processing. Two new log messages show the status of the end-to-end server:
EQQ3120E END-TO-END SERVER NOT AVAILABLE
EQQZ193I END-TO-END TRANSLATOR SERVER PROCESS IS NOW AVAILABLE
Improved job log retrieval performance
In IBM Tivoli Workload Scheduler 8.1, the thread structure of the Translator
process meant that only normal incoming events were immediately passed to the
controller; job log events were detected by the controller only when another event
arrived or after a 30-second timeout.
In IBM Tivoli Workload Scheduler 8.2, a new input-writer thread has been
implemented that manages the writing of events to the input queue and takes
input from both the input translator and the job log retriever. This enables the job
log retriever to test whether there is room on the input queue and, if not, to loop
until enough space is available. Meanwhile, the input translator can continue to
write its smaller events to the queue.
1.6.2 General enhancements
In this section, we cover enhancements in the general category.
Centralized Script Library Management
In order to ease the migration path from OPC tracker agents to IBM Tivoli
Workload Scheduler Distributed Agents, a new function has been introduced in
Tivoli Workload Scheduler 8.2 called Centralized Script Library Management (or
Centralized Scripting). It is now possible to use the Tivoli Workload Scheduler for
z/OS engine as the centralized repository for scripts of distributed jobs.
The centralized script is stored in the JOBLIB, and it provides features that were
available on OPC tracker agents, such as:
JCL Editing
Variable substitution and Job Setup
Automatic Recovery
Support for usage of the job-submit exit (EQQUX001)
Note: The centralized script feature is not supported for fault-tolerant jobs
running on an AS/400 fault-tolerant agent.
Rules for defining centralized scripts
To define a centralized script in the JOBLIB, the following rules must be
considered:
The lines that start with //* OPC, //*%OPC, and //*>OPC are used for
variable substitution and automatic recovery. They are removed before the
script is downloaded to the distributed agent.
Each line spans columns 1 through 80.
A backslash (\) in column 80 is the continuation character.
Blanks at the end of each line are automatically removed.
These rules guarantee compatibility with the old tracker agent jobs.
Note: The SCRIPTLIB follows the TSO rules, so the rules to define a
centralized script in the JOBLIB differ from those to define the JOBSCR and
JOBCMD of a non-centralized script.
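To illustrate these rules, a minimal sketch of a centralized script member in the JOBLIB might look like the following (the member content, variable name, and script path are hypothetical; the //*%OPC directive lines are removed before the script is downloaded to the agent):

```
//*%OPC SCAN
//*%OPC SETVAR TVAR=(ODAY+1)
#!/bin/sh
# Body of the centralized script, stored in the JOBLIB and
# downloaded to the distributed agent at submission time
echo "Running extract for day &TVAR."
/opt/tws/scripts/daily_extract.sh
```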
For more details, refer to 4.5.2, “Definition of centralized scripts” on page 219.
A new data set, EQQTWSCS, has been introduced with this new release to
facilitate centralized scripting. EQQTWSCS is a PDSE data set used to
temporarily store a script while it is downloaded from the JOBLIB data set to the
agent for submission.
User interface changes for the centralized script
Centralized Scripting required changes to several Tivoli Workload Scheduler for
z/OS interfaces such as ISPF, Job Scheduling Console, and a number of batch
interfaces. In this section, we cover the changes to the user interfaces ISPF and
Job Scheduling Console.
In ISPF, a new job option has been added to specify whether an operation that
runs on a fault-tolerant workstation has a centralized script. It can take the value
Y or N:
Y if the job has the script stored centrally in the JOBLIB.
N if the script is stored locally and the job has the job definition in the
SCRIPTLIB.
In the database, the value of this new job option can be modified during the
add/modify of an application or operation. It can be set for every operation,
without workstation checking. When a new operation is created, the default value
for this option is N. For non-FTW (Fault Tolerant Workstation) operations, the
value of the option is automatically changed to Y during Daily Plan or when
exiting the Modify an occurrence or Create an occurrence dialog.
The new Centralized Script option was added for operations in the Application
Description database and is always editable (Figure 1-3).
Figure 1-3 CENTRALIZED SCRIPT option in the AD dialog
The Centralized Script option also has been added for operations in the current
plan. It is editable only when adding a new operation. It can be browsed when
modifying an operation (Figure 1-4 on page 13).
Figure 1-4 CENTRALIZED SCRIPT option in the CP dialog
Similarly, Centralized Script has been added in the Job Scheduling Console
dialog for creating an FTW task, as shown in Figure 1-5.
Figure 1-5 Centralized Script option in the JSC dialog
Considerations when using centralized scripts
Using centralized scripts can ease the migration path from OPC tracker agents to
FTAs. It is also easier to maintain the centralized scripts because they are kept in
a central location, but these benefits come with some limitations. When deciding
whether to store the script locally or centrally, take into consideration that:
The script must be downloaded every time a job runs. There is no caching
mechanism on the FTA. The script is discarded as soon as the job completes.
A rerun of a centralized job causes the script to be downloaded again.
There is a reduction in fault tolerance, because a centralized
dependency can be released only by the controller.
Recovery for non-centralized jobs
In Tivoli Workload Scheduler 8.2, a new simple syntax has been added in the job
definition to specify recovery options and actions. Recovery is performed
automatically on the FTA in case of an abend. With this feature, it is now possible
to use recovery for jobs running in an end-to-end network, as implemented in
the IBM Tivoli Workload Scheduler distributed product.
Defining recovery for non-centralized jobs
To activate recovery for a non-centralized job, you have to specify the
RECOVERY statement in the job member in the SCRPTLIB.
It is possible to specify one or both of the following recovery actions:
A recovery job (JOBCMD or JOBSCR keywords)
A recovery prompt (MESSAGE keyword)
The recovery actions must be followed by one of the recovery options (the
OPTION keyword): stop, continue, or rerun. The default is stop, with no recovery
job and no recovery prompt.
Figure 1-6 on page 15 shows the syntax of the RECOVERY statement.
Figure 1-6 Syntax of the RECOVERY statement
The keywords JOBUSR, JOBWS, INTRACTV, and RCCONDSUC can be used
only if you have defined a recovery job using the JOBSCR or JOBCMD keyword.
You cannot use the recovery prompt if you specify the recovery STOP option
without using a recovery job. Having the OPTION(RERUN) and no recovery
prompt specified could cause a loop. To prevent this situation, after a failed rerun
of the job, a recovery prompt message is shown automatically.
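As a sketch, a SCRPTLIB member that combines a recovery prompt and a recovery job with the rerun option might look like the following (the script paths and message text are hypothetical):

```
JOBREC
  JOBSCR('/opt/tws/scripts/nightly_load.sh')
RECOVERY
  OPTION(RERUN)
  JOBSCR('/opt/tws/scripts/cleanup.sh')
  MESSAGE('Cleanup will run - OK to rerun nightly load?')
```

With this definition, if the principal job abends, the prompt is issued; on a “yes” reply the recovery job runs and, if it is successful, the principal job is rerun.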
Note: The RECOVERY statement is ignored if it is used with a job that runs a
centralized script.
For more details, refer to 4.5.3, “Definition of non-centralized scripts” on
page 221.
Recovery actions available
The following table describes the recovery actions that can be taken against a job
that ended in error (and not failed). Note that JobP is the principal job, while JobR
is the recovery job.
Table 1-1 The recovery actions taken against a job that ended in error

No recovery prompt / no recovery job:
Stop: JobP remains in error.
Continue: JobP is completed.
Rerun: Rerun JobP.

A recovery prompt / no recovery job:
Stop: Issue the prompt. JobP remains in error.
Continue: Issue the prompt. If “yes” reply, JobP is completed. If “no” reply, JobP remains in error.
Rerun: Issue the prompt. If “no” reply, JobP remains in error. If “yes” reply, rerun JobP.

No recovery prompt / a recovery job:
Stop: Launch JobR. If it is successful, JobP is completed; otherwise JobP remains in error.
Continue: Launch JobR. JobP is completed.
Rerun: Launch JobR. If it is successful, rerun JobP; otherwise JobP remains in error.

A recovery prompt / a recovery job:
Stop: Issue the prompt. If “no” reply, JobP remains in error. If “yes” reply, launch JobR; if it is successful, JobP is completed; otherwise JobP remains in error.
Continue: Issue the prompt. If “no” reply, JobP remains in error. If “yes” reply, launch JobR; JobP is completed.
Rerun: Issue the prompt. If “no” reply, JobP remains in error. If “yes” reply, launch JobR; if it is successful, rerun JobP; otherwise JobP remains in error.
Job Instance Recovery Information panels
Figure 1-7 shows the Job Scheduling Console Job Instance Recovery
Information panel. You can browse the job log of the recovery job, and you can
reply to the recovery prompt. Note the mapping between the fields in the Job
Scheduling Console panel and the JOBREC parameters.
Figure 1-7 JSC and JOBREC parameters mapping
Also note that you can access the same information from the ISPF panels. From
the Operation list in MCP (5.3), if the operation is abended and the RECOVERY
statement has been used, you can use the row command RI (Recovery
Information) to display the new panel EQQRINP as shown in Figure 1-8.
Figure 1-8 EQQRINP ISPF panel
Variable substitution for non-centralized jobs
In Tivoli Workload Scheduler 8.2, a new simple syntax has been added in the job
definition to specify variable substitution directives. This provides the capability
to use variable substitution for jobs running in an end-to-end network without
using the centralized script solution.
Tivoli Workload Scheduler for z/OS–supplied variables and user-defined
variables (defined using a table) are supported in this new function. Variables are
substituted when a job is added to Symphony (that is, when the Daily Planning
creates the Symphony or the job is added to the plan using the MCP dialog).
To activate variable substitution, use the VARSUB statement. The syntax of
the VARSUB statement is given in Figure 1-9 on page 18. Note that it must be
the first statement in the SCRPTLIB member containing the job definition. The
VARSUB statement enables you to specify variables when you set a statement
keyword in the job definition.
Figure 1-9 Syntax of the VARSUB statement
Use the TABLES keyword to identify the variable tables that must be searched,
and the search order. In particular:
– APPL indicates the application variable table specified in the VARIABLE
TABLE field on the MCP panel, at occurrence level.
– GLOBAL indicates the table defined in the GTABLE keyword of the
OPCOPTS controller and BATCHOPT batch options.
Any non-alphanumeric character, except blanks, can be used as a symbol to
indicate that the characters that follow represent a variable. You can define two
kinds of symbols using the PREFIX and BACKPREF keywords in the VARSUB
statement; these allow you to define simple and compound variables.
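As an illustrative sketch only (the table names, script path, variable name, and
user ID are placeholders, and exact quoting should be verified against the
reference manuals), a SCRPTLIB job definition combining VARSUB with JOBREC
might look like this:

   VARSUB TABLES(APPL,GLOBAL)
          PREFIX('&')
          BACKPREF('%')
   JOBREC JOBSCR('/prod/scripts/&JOBNAME..sh')
          JOBUSR(twsuser)

In this sketch, '&' introduces a simple variable and '%' a compound variable,
and the APPL table is searched before the GLOBAL table.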
For more details, refer to 4.5.3, “Definition of non-centralized scripts” on
page 221, and “Job Tailoring” in IBM Tivoli Workload Scheduler for z/OS
Managing the Workload, SC32-1263.
Return code mapping
In Tivoli Workload Scheduler 8.1, if a fault-tolerant job ends with a return code
greater than 0, it is considered abended.
It should be possible, however, to define whether a job is successful or abended
according to a "success condition" defined at the job level. This would supply
the NOERROR functionality, which was supported only for host jobs.
In Tivoli Workload Scheduler 8.2 for z/OS, a new keyword (RCCONDSUC) has
been added in the job definition to specify the success condition. Tivoli Workload
Scheduler 8.2 for z/OS interfaces show the operations return code.
Customize the JOBREC and RECOVERY statements in the SCRPTLIB to
specify a success condition for the job by adding the RCCONDSUC keyword. The
success condition expression can contain a combination of comparison and
Boolean expressions.
Comparison expression
A comparison expression specifies the job return codes. The syntax is:
(RC operator operand)
RC        The RC keyword.
operator  A comparison operator. Table 1-2 lists the values it can have.
operand   An integer between -2147483647 and 2147483647.
Table 1-2  Comparison operator values

Example   Operator  Description
RC < a    <         Less than
RC <= a   <=        Less than or equal to
RC > a    >         Greater than
RC >= a   >=        Greater than or equal to
RC = a    =         Equal to
RC <> a   <>        Not equal to
Note: Unlike IBM Tivoli Workload Scheduler distributed, the != operator is not
supported to specify a ‘not equal to’ condition.
The successful RC is specified by a logical combination of comparison
expressions. The syntax is: comparison_expression operator
comparison_expression.
For example, you can define a successful job as a job that ends with a return
code less than 3 or equal to 5 as follows:
RCCONDSUC("(RC<3) OR (RC=5)")
Note: If you do not specify the RCCONDSUC, only a return code equal to zero
corresponds to a successful condition.
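Putting the pieces together, a SCRPTLIB member that declares this success
condition might look like the following sketch (the script path and user ID are
placeholders, not taken from the scenarios in this book):

   JOBREC JOBSCR('/prod/scripts/daily_extract.sh')
          JOBUSR(twsuser)
          RCCONDSUC('(RC<3) OR (RC=5)')

With this definition, the job is considered successful if it ends with return
code 0, 1, 2, or 5; any other return code leaves it in error.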
Late job handling
In IBM Tivoli Workload Scheduler 8.2 distributed, a user can define a DEADLINE
time for a job or a job stream. If the job never started or if it is still executing after
the deadline time has passed, Tivoli Workload Scheduler informs the user about
the missed deadline.
IBM Tivoli Workload Scheduler for z/OS 8.2 now supports this function. In
Version 8.2, the user can specify and modify a deadline time for a job or a job
stream. If the job is running on a fault-tolerant agent, the deadline time is also
stored in the Symphony file, and it is managed locally by the FTA.
In an end-to-end network, the deadline is always defined for operations and
occurrences. To improve performance, the batchman process on USS does not
check the deadline.
1.6.3 Security enhancements
This new version includes a number of security enhancements, which are
discussed in this section.
Firewall support in an end-to-end environment
In previous versions of Tivoli Workload Scheduler for z/OS, running the
commands to start or stop a workstation, or to get the standard list, required
opening a direct TCP/IP connection between the originator and the destination
nodes. In a firewall environment, this forced users to break the firewall to
open a direct communication path between the Tivoli Workload Scheduler for
z/OS master and each fault-tolerant agent in the network.
In this version, it is now possible to enable the firewall support of Tivoli
Workload Scheduler in an end-to-end environment. If a firewall exists between a
workstation and its domain manager, set the FIREWALL option to YES in the
CPUREC statement to force the start, stop, and get-job-output commands to go
through the domain hierarchy.
Example 1-2 shows a CPUREC definition that enables the firewall support.
Example 1-2 CPUREC definition with firewall support enabled
CPUREC CPUNAME(TWAD)
CPUOS(WNT)
CPUNODE(jsgui)
CPUDOMAIN(maindom)
CPUTYPE(FTA)
FIREWALL(Y)
SSL support
It is now possible to enable the strong authentication and encryption (SSL)
support of IBM Tivoli Workload Scheduler in an end-to-end environment.
You can enable the Tivoli Workload Scheduler processes that run as USS (UNIX
System Services) processes in the Tivoli Workload Scheduler for z/OS address
space to establish SSL authentication between a Tivoli Workload Scheduler for
z/OS master and the underlying IBM Tivoli Workload Scheduler domain
managers.
The authentication mechanism of IBM Tivoli Workload Scheduler is based on the
OpenSSL toolkit, while IBM Tivoli Workload Scheduler for z/OS uses the System
SSL services of z/OS.
To enable SSL authentication for your end-to-end network, you must perform the
following actions:
1. Create as many private keys, certificates, and trusted certification authority
(CA) chains as you plan to use in your network.
Refer to the OS/390 V2R10.0 System SSL Programming Guide and
Reference, SC23-3978, for further details about the SSL protocol.
2. Customize the localopts file on IBM Tivoli Workload Scheduler workstations.
To find out how to enable SSL in the IBM Tivoli Workload Scheduler domain
managers, refer to IBM Tivoli Workload Scheduler for z/OS Installation,
SC32-1264.
3. Configure IBM Tivoli Workload Scheduler for z/OS:
– Customize localopts file on USS workdir.
– Customize the TOPOLOGY statement for the OPCMASTER.
– Customize CPUREC statements for every workstation in the net.
Refer to IBM Tivoli Workload Scheduler for z/OS Customization and Tuning,
SC32-1265, for details about SSL support in Tivoli Workload Scheduler for z/OS.
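On the distributed side, the SSL settings are kept in each workstation's
localopts file. The following is a minimal sketch of the kind of SSL-related
entries involved (the port number and file paths are placeholders; see the
Customization and Tuning manual for the authoritative list of attributes):

   nm SSL port        =31113
   SSL key            =/opt/tws/ssl/tws.key
   SSL certificate    =/opt/tws/ssl/tws.crt
   SSL key pwd        =/opt/tws/ssl/twspwd.sth
   SSL CA certificate =/opt/tws/ssl/cacert.crt
   SSL random seed    =/opt/tws/ssl/random.seed
   SSL auth mode      =caonly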
1.7 The terminology used in this book
The IBM Tivoli Workload Scheduler 8.2 suite comprises two somewhat different
software programs, each with its own history and terminology. For this reason,
there are sometimes two different and interchangeable names for the same
thing. Other times, a term used in one context can have a different meaning in
another context. To help clear up this confusion, we now introduce some of the
terms and acronyms that will be used throughout the book. In order to make the
terminology used in this book internally consistent, we adopted a system of
terminology that may be a bit different than that used in the product
documentation. So take a moment to read through this list, even if you are
already familiar with the products.
IBM Tivoli Workload Scheduler 8.2 suite
The suite of programs that includes IBM Tivoli Workload
Scheduler and IBM Tivoli Workload Scheduler for z/OS.
These programs are used together to make end-to-end
scheduling work. Sometimes called just IBM Tivoli
Workload Scheduler.
IBM Tivoli Workload Scheduler
This is the version of IBM Tivoli Workload Scheduler that
runs on UNIX, OS/400, and Windows operating systems,
as distinguished from IBM Tivoli Workload Scheduler for
z/OS, a somewhat different program. Sometimes called
IBM Tivoli Workload Scheduler Distributed. IBM Tivoli
Workload Scheduler is based on the old Maestro
program.
IBM Tivoli Workload Scheduler for z/OS
This is the version of IBM Tivoli Workload Scheduler that
runs on z/OS, as distinguished from IBM Tivoli Workload
Scheduler (by itself, without the for z/OS specification).
IBM Tivoli Workload Scheduler for z/OS is based on the
old OPC program.
Master The top level of the IBM Tivoli Workload Scheduler or IBM
Tivoli Workload Scheduler for z/OS scheduling network.
Also called the master domain manager, because it is the
domain manager of the MASTERDM (top-level) domain.
Domain manager The agent responsible for handling dependency
resolution for subordinate agents. Essentially an FTA with
a few extra responsibilities.
Fault-tolerant agent An agent that keeps its own local copy of the plan file and
can continue operation even if the connection to the
parent domain manager is lost. Also called an FTA. In IBM
Tivoli Workload Scheduler for z/OS, FTAs are referred to
as fault tolerant workstations.
Scheduling engine An IBM Tivoli Workload Scheduler engine or IBM Tivoli
Workload Scheduler for z/OS engine.
IBM Tivoli Workload Scheduler engine
The part of IBM Tivoli Workload Scheduler that does
actual scheduling work, as distinguished from the other
components that are related primarily to the user interface
(for example, the IBM Tivoli Workload Scheduler
connector). Essentially the part of IBM Tivoli Workload
Scheduler that is descended from the old Maestro
program.
IBM Tivoli Workload Scheduler for z/OS engine
The part of IBM Tivoli Workload Scheduler for z/OS that
does actual scheduling work, as distinguished from the
other components that are related primarily to the user
interface (for example, the IBM Tivoli Workload Scheduler
for z/OS connector). Essentially the controller plus the
server.
IBM Tivoli Workload Scheduler for z/OS controller
The part of the IBM Tivoli Workload Scheduler for z/OS
engine that is based on the old OPC program.
IBM Tivoli Workload Scheduler for z/OS server
The part of IBM Tivoli Workload Scheduler for z/OS that is
based on the UNIX IBM Tivoli Workload Scheduler code.
Runs in UNIX System Services (USS) on the mainframe.
JSC Job Scheduling Console. This is the common graphical
user interface (GUI) to both the IBM Tivoli Workload
Scheduler and IBM Tivoli Workload Scheduler for z/OS
scheduling engines.
Connector A small program that provides an interface between the
common GUI (Job Scheduling Console) and one or more
scheduling engines. The connector translates to and from
the different “languages” used by the different scheduling
engines.
JSS Job Scheduling Services. Essentially a library that is used
by the connectors.
TMF Tivoli Management Framework. Also called just the
Framework.
If you are already familiar with both IBM Tivoli Workload Scheduler and IBM
Tivoli Workload Scheduler for z/OS, skip ahead to the third section, in which we
describe how both programs work together when configured as an end-to-end
network.
The Job Scheduling Console, its components, and its architecture are described
in the last topic. In that topic, we describe the different components that are
used to establish a Job Scheduling Console environment.
2.1 IBM Tivoli Workload Scheduler for z/OS architecture
IBM Tivoli Workload Scheduler for z/OS expands the scope for automating your
data processing operations. It plans and automatically schedules the production
workload. From a single point of control, it drives and controls the workload
processing at both local and remote sites. By using IBM Tivoli Workload
Scheduler for z/OS to increase automation, you use your data processing
resources more efficiently, have more control over your data processing assets,
and manage your production workload processing better.
IBM Tivoli Workload Scheduler for z/OS is composed of three major features:
The IBM Tivoli Workload Scheduler for z/OS agent feature
The agent is the base product in IBM Tivoli Workload Scheduler for z/OS. The
agent is also called a tracker. It must run on every operating system in your
z/OS complex on which IBM Tivoli Workload Scheduler for z/OS controlled
work runs. The agent records details of job starts and passes that information
to the engine, which updates the plan with statuses.
The IBM Tivoli Workload Scheduler for z/OS engine feature
One z/OS operating system in your complex is designated the controlling
system and it runs the engine. The engine is also called the controller. Only
one engine feature is required, even when you want to establish standby
engines on other z/OS systems in a sysplex.
The engine manages the databases and the plans and causes the work to be
submitted at the appropriate time and at the appropriate system in your z/OS
sysplex or on another system in a connected z/OS sysplex or z/OS system.
The IBM Tivoli Workload Scheduler for z/OS end-to-end feature
This feature makes it possible for the IBM Tivoli Workload Scheduler for z/OS
engine to manage a production workload in a Tivoli Workload Scheduler
distributed environment. You can schedule, control, and monitor jobs in Tivoli
Workload Scheduler from the Tivoli Workload Scheduler for z/OS engine with
this feature.
The end-to-end feature is covered in 2.3, “End-to-end scheduling
architecture” on page 59.
The workload on other operating environments can also be controlled with the
open interfaces that are provided with Tivoli Workload Scheduler for z/OS.
Sample programs using TCP/IP or a Network Job Entry/Remote Spooling
Communication Subsystem (NJE/RSCS) combination show you how you can
control the workload on environments that at present have no scheduling
feature.
Chapter 2. End-to-end scheduling architecture 27
In addition to these major parts, the IBM Tivoli Workload Scheduler for z/OS
product also contains the IBM Tivoli Workload Scheduler for z/OS connector and
the Job Scheduling Console (JSC).
IBM Tivoli Workload Scheduler for z/OS connector
Maps the Job Scheduling Console commands to the IBM Tivoli Workload
Scheduler for z/OS engine. The Tivoli Workload Scheduler for z/OS connector
requires that the Tivoli Management Framework be configured for a Tivoli
server or Tivoli managed node.
Job Scheduling Console
A Java-based graphical user interface (GUI) for the IBM Tivoli Workload
Scheduler suite.
The Job Scheduling Console runs on any machine from which you want to
manage Tivoli Workload Scheduler for z/OS engine plan and database
objects. It provides, through the IBM Tivoli Workload Scheduler for z/OS
connector, functionality similar to the IBM Tivoli Workload Scheduler for z/OS
legacy ISPF interface. You can use the Job Scheduling Console from any
machine as long as it has a TCP/IP link with the machine running the IBM
Tivoli Workload Scheduler for z/OS connector.
The same Job Scheduling Console can be used for Tivoli Workload
Scheduler and Tivoli Workload Scheduler for z/OS.
In the next topics, we provide an overview of IBM Tivoli Workload Scheduler for
z/OS configuration, the architecture, and the terminology used in Tivoli Workload
Scheduler for z/OS.
2.1.1 Tivoli Workload Scheduler for z/OS configuration
IBM Tivoli Workload Scheduler for z/OS supports many configuration options
using a variety of communication methods:
The controlling system (the controller or engine)
Controlled z/OS systems
Remote panels and program interface applications
Job Scheduling Console
Scheduling jobs that are in a distributed environment using Tivoli Workload
Scheduler (described in 2.3, “End-to-end scheduling architecture” on
page 59)
The controlling system
The controlling system requires both the agent and the engine. One controlling
system can manage the production workload across all of your operating
environments.
The engine is the focal point of control and information. It contains the controlling
functions, the dialogs, the databases, the plans, and the scheduler’s own batch
programs for housekeeping and so forth. Only one engine is required to control
the entire installation, including local and remote systems.
Because IBM Tivoli Workload Scheduler for z/OS provides a single point of
control for your production workload, it is important to make this system
redundant. This minimizes the risk of having any outages in your production
workload in case the engine or the system with the engine fails. To make the
engine redundant, one can start backup engines (hot standby engines) on other
systems in the same sysplex as the active engine. If the active engine or the
controlling system fails, Tivoli Workload Scheduler for z/OS can automatically
transfer the controlling functions to a backup system within a Parallel Sysplex.
Through the cross-system coupling facility (XCF), IBM Tivoli Workload Scheduler for z/OS
can automatically maintain production workload processing during system
failures. The standby engine can be started on several z/OS systems in the
sysplex.
Figure 2-1 on page 30 shows an active engine with two standby engines running
in one sysplex. When an engine is started on a system in the sysplex, it checks
whether there is already an active engine in the sysplex. If there is no active
engine, it becomes the active engine; if there is one, it becomes a standby
engine. The engine in Figure 2-1 on page 30 has connections to eight
agents: three in the sysplex, two remote, and three in another sysplex. The
agents on the remote systems and in the other sysplex are connected to the
active engine via ACF/VTAM® connections.
Figure 2-1 Two sysplex environments and stand-alone systems (an active engine
and two standby engines in one sysplex, connected via VTAM to remote agents
and to agents in a second sysplex)
Controlled z/OS systems
An agent is required for every controlled z/OS system in a configuration. This
includes, for example, locally controlled systems within shared DASD or sysplex
configurations.
The agent runs as a z/OS subsystem and interfaces with the operating system
through JES2 or JES3 (the Job Entry Subsystem) and SMF (System
Management Facility), using the subsystem interface and the operating system
exits. The agent monitors and logs the status of work, and passes the status
information to the engine via shared DASD, XCF, or ACF/VTAM.
You can exploit z/OS and the cross-system coupling facility (XCF) to connect
your local z/OS systems. Rather than being passed to the controlling system via
shared DASD, work status information is passed directly via XCF connections.
XCF enables you to exploit all production-workload-restart facilities and its hot
standby function in Tivoli Workload Scheduler for z/OS.
Remote systems
The agent on a remote z/OS system passes status information about the
production work in progress to the engine on the controlling system. All
communication between Tivoli Workload Scheduler for z/OS subsystems on the
controlling and remote systems is done via ACF/VTAM.
Tivoli Workload Scheduler for z/OS enables you to link remote systems using
ACF/VTAM networks. Remote systems are frequently used locally (on premises)
to reduce the complexity of the data processing installation.
Remote panels and program interface applications
ISPF panels and program interface (PIF) applications can run in a different z/OS
system than the one where the active engine is running. Dialogs and PIF
applications send requests to and receive data from a Tivoli Workload Scheduler
for z/OS server that is running on the same z/OS system as the target engine, via
advanced program-to-program communications (APPC). The APPC server
communicates with the active engine to perform the requested actions.
Using an APPC server for ISPF panels and PIF gives the user the freedom to run
ISPF panels and PIF on any system in a z/OS enterprise, as long as this system
has advanced program-to-program communication with the system where the
active engine is started. This also means that you do not have to make sure that
your PIF jobs always run on the z/OS system where the active engine is started.
Furthermore, using the APPC server makes it seamless for panel users and PIF
programs if the engine is moved to its backup engine.
The APPC server is a separate address space, started and stopped either
automatically by the engine, or by the user via the z/OS start command. There
can be more than one server for an engine. If the dialogs or the PIF applications
run on the same z/OS system as the target engine, the server may not be
involved. As shown in Figure 2-2 on page 32, it is possible to run the IBM Tivoli
Workload Scheduler for z/OS dialogs and PIF applications from any system as
long as the system has an ACF/VTAM connection to the APPC server.