
Bicol University

College of Science
Legazpi City

General Overview of the Influential Operating System

IBM OS/360
In partial fulfillment of the requirements in
CS 23 – Operating System Concepts II

Angelica Carla Cells


Jose Udel Dela Cruz Jr.
Genevieve Oca
Raiza Oropesa
Seigi Takada
Errol Zuñiga Jr.

BSCS III - A

Noli Lucila Jr.


Professor

TABLE OF CONTENTS

IBM OS/360

History
Design Principles
Kernel Modules
Process Management
Scheduling
Memory Management
File System
I/O
IPC
Teleprocessing (Network Structure)
Security
References

IBM SYSTEM/360 HISTORY

On April 7, 1964, the face of computing changed. Before then, most computers were
designed for a specific purpose: mathematics, science, engineering, computing,
communications, or storage. The System/360 was one of the first general-purpose mainframe
computers.

IBM changed this in April 1964, when it announced the IBM System/360 family. Some believe
that the term "360" was coined by IBM to express its ambition to encompass customers' needs
through a full 360 degrees. That ambition was not fulfilled in the naming, since the numbers 370
and 390 were used afterwards (they relate, approximately, to the years when the new S/370 and
S/390 technologies appeared).

The 360 family was intended to have 3 operating systems:


 DOS/360 for the small machines.
 OS/360 for the midrange and high end.
 TSS/360 for Time-Sharing Multi-User systems.

OS/360 was an operating system developed to support the new generation and architecture of
System/360 hardware, hardware capable of supporting both commercial and scientific
applications. Prior to System/360, those applications ran on separate lines of hardware.

The name OS/360 was intended to convey that this was ONE system that was suitable for the
whole family of machines.
In reality, there were three versions from the very beginning:
 OS/360-PCP (Primary Control Program)
 OS/360-MFT (Multi-programming with a Fixed number of Tasks)
 OS/360-MVT (Multi-programming with a Variable number of Tasks)

OS/360-PCP (Primary Control Program) was a very simple version which could only run one
program at a time. Inside IBM, it was used to develop the tools that would eventually run on the
larger OS/360 versions.

OS/360-MFT (Multi-programming with a Fixed number of Tasks) could run several programs at
once, but you had to divide the memory into a set of partitions, and each partition could run one
job at a time. If a partition was idle, its memory was not available to programs running in the
other partitions. The advantage was that the system was very stable, and in most data centers
the workload could be structured into small jobs and big jobs.

OS/360-MVT (Multi-programming with a Variable number of Tasks) allowed partitions to be
created and deleted on the fly as needed. If there was memory available, it would search the job
queue for a job that would fit in the space available, and then create a partition of the size that the
job requested. The advantage is obvious; the disadvantage was that after a while, small jobs with
long running times would be sitting in the middle of memory, leaving open spaces before and
after that were too small to fit any of the jobs that were waiting. To make MVT work, you needed
to install a package called the HASP job scheduler, which managed the queue, tagged each job
with one of several fixed sizes, and released only one job at a time into the MVT scheduler's
queue for each of a predefined number of "slots". The result was very much like MFT, but one
could make room for a very large job to take the whole machine when needed.

After the virtual addressing hardware became available, these systems were renamed: MFT
became OS/VS1 and MVT became OS/VS2. Both versions used the addressing hardware to
combine leftover areas spread throughout memory into a logically contiguous area for another
job to run in. A further enhancement of OS/VS2 became MVS (Multiple Virtual Storage). In the
real world, machines that were too small to run the other OS versions always ran DOS.

DESIGN PRINCIPLES
 Donald Michael Ludlow was the principle design and developer and later
manager of the Input-Output Supervisor (IOS) component of OS/360.

 OS/360 was developed in a world where different computing environments
required different lines of hardware. IBM had the vision to merge these separate lines into
one product, and it developed a new way of thinking about the commonalities among
processes that were previously thought to be irreconcilable.

 The name “OS/360” refers to IBM's intention that this system be able to meet all
of the needs of its users, “encompassing customer needs” 360 degrees. In hindsight, of
course, this was not possible.

 IBM saw OS/360 as being what it called a “second generation” operating system.
The goal of this new generation of operating system was to produce a General Purpose
System that could satisfy the data processing needs of the majority of users at medium
and large scale OS/360 installations, with “accommodating an environment of diverse
applications and operating modes” as its primary objective. It would enable flexibility of
use: an installation could run large, traditional batch jobs as well as jobs whose function
was to allow terminals access to a data set. Secondary design goals included:

 Throughput was increased by letting operators set up for jobs that were
waiting in the queue, and by permitting concurrent use of system resources.

 Programmer productivity was increased by support of a large number
of source languages.

 Response time. As this system sought to meet the needs of both batch
job environments and real-time environments, response time was not
defined as a single measure; expectations varied widely between different
types of systems. In a batch environment, turnaround time was defined
as the time between when a user handed a deck of punched cards
to a computer operator and the time the printout was ready to be picked
up. At the other end of the spectrum, in a real-time environment,
responses were measured down to the millisecond. Priority options and
control program options allowed shops to customize the system as
needed. The machine was highly configurable, in both the software and
hardware spheres, both at initial installation and post-installation.

 Adaptability. To enable flexibility and adaptability, programs could
be written to be device independent. The system enabled shops to
reduce the issues previously associated with device changes, where
entire sections of code had to be rewritten.

 Expandability. The system had a higher degree of expandability than
previously implemented, with the ability to add new hardware,
applications, and programs.

 In designing a method of control for a second-generation system, two opposing
viewpoints had to be reconciled. In the first-generation operating systems, the point of view
was that the machine executed an incoming stream of programs; each program and its
associated input data corresponded to one application or problem. In the first-generation
real-time systems, on the other hand, the point of view was that incoming pieces of data
were routed to one of a number of processing programs. These attitudes led to quite
different system structures; it was not recognized that these points of view were matters
of degree rather than kind. The basic consideration, however, is one of emphasis:
programs are used to process data in both cases. Because it is the combination of
program and data that marks a unit of work for control purposes, OS/360 takes such a
combination as the distinguishing property of a task.

 In laying down conceptual groundwork, the OS/360 designers employed the
notion of multitask operation wherein, at any time, a number of tasks may contend for
and employ system resources. The term multiprogramming is ordinarily used for the
case in which one CPU is shared by a number of tasks; the term multiprocessing, for
the case in which a separate task is assigned to each of several CPUs. The multitask
operation, as a concept, gives recognition to both terms. If its work is structured entirely
in the form of tasks, a job may lend itself without change to either environment.

 The 360 system, therefore, was the first system capable of supporting both
commercial and scientific applications, which had previously been separated onto different
lines of hardware. It was also one of the first multiprogramming operating systems, and the
System/360 is widely regarded as the first general-purpose mainframe computer family.

 The System/360 introduced a number of industry standards to the marketplace,
such as:

 Introduction of the 8-bit byte, over the prevalent 6-bit standard, made it
easier to handle both business information processing and scientific data
processing. The era of separate machines for business and science was
over.

 EBCDIC. The System/360 was originally to use the ASCII character set,
and IBM was a major advocate of the ASCII standardization process.
However, IBM did not have enough ASCII-based peripherals ready for
the System/360's launch, and decided instead on EBCDIC, a derivation
of the earlier Binary-Coded Decimal (BCD) system. There was also
pressure from large commercial and government customers who had
massive BCD files, which could not be converted context-free into ASCII.
EBCDIC had been used in some earlier systems, but the System/360
turned EBCDIC into an industry standard for mainframe computing due
to its own success and the subsequent need to maintain backward
compatibility.

 Byte-addressable memory (as opposed to word-addressable memory).

 32-bit words.

 Two's complement arithmetic.

 Commercial use of microcoded CPUs.

 IBM Floating Point Architecture (until superseded by the IEEE 754-1985
floating-point standard, 20 years later).

 System/360 could handle logic instructions as well as three types of
arithmetic instructions (fixed-point binary, fixed-point decimal and
floating-point hexadecimal).

 The system’s architectural unity helped lower customer costs,
improved computing efficiency and, quite frankly, took a lot of the
mystery out of the art of computing.

KERNEL MODULES

To create a General Purpose System, IBM decided to allow each customer to generate
the kind of operating system they required through modular construction: creating an
operating system from modules that can be assembled and linked together in many
combinations to form a unique operating system, and that can be replaced independently of one
another. Modules are of three types: required, alternative, and optional. The figure below
illustrates this.

The System/360’s standardized input and output interfaces made it possible for
customers to tailor systems to their specific needs.

REQUIRED PARTS
As seen by a user, OS/360 consists of a set of language translators, a set of service
programs, and a control program. Moreover, from the viewpoint of system management, a
System/360 installation may look upon its own application programs as an integral part of the
operating system.

TRANSLATORS
A variety of translators are provided for FORTRAN, COBOL, and RPG (a
Report Program Generator language). Also provided is a translator for PL/I, a new
generalized language. The programmer who chooses to employ the assembler language
can take advantage of macroinstructions; the assembler program is supplemented by a
macro generator that produces a suitable set of assembly language statements for each
macroinstruction in the source program.

SERVICE PROGRAMS
Groups of individually translated programs can be combined into a single
executable program by a linkage editor. The linkage editor makes it possible to change a
program without re-translating more than the affected segment of the program. Where a
program is too large for the available main-storage area, the function of handling program
segments and overlays falls to the linkage editor.
The sort/merge is a generalized program that can arrange the fixed- or variable-
length records of a data set into ascending or descending order. The process can
employ either magnetic-tape or direct-access storage devices for input, output, and
intermediate storage. The program is adaptable in the sense that it takes advantage of all
the input/output resources allocated to it by the control program. The sort/merge can be
used independently of other programs or can be invoked by them directly; it can also be
used via COBOL and PL/I.
Included in the service programs are routines for editing, arranging, and updating
the contents of the library; revising the index structure of the library catalog; printing an
inventory list of the catalog; and moving and editing data from one storage medium to
another.

CONTROL PROGRAM
Roughly speaking, the control program subdivides into master scheduler, job
scheduler, and supervisor. Central control lodges in the supervisor, which has
responsibility for the storage allocation, task sequencing, and input/output monitoring
functions. The master scheduler handles all communications to and from the operator,
whereas the job scheduler is primarily concerned with job-stream analysis, input/output
device allocation and setup, and job initiation and termination.

 SUPERVISOR
Among the activities performed by the supervisor are the following:
 Allocating main storage
 Loading programs into main storage
 Controlling the concurrent execution of tasks
 Providing clocking services
 Attempting recoveries from exceptional conditions
 Logging errors
 Providing summary information on facility usage
 Issuing and monitoring input/output operations
The supervisor ordinarily gains control of the central processing unit
by way of an interruption. Such an interruption may stem from an explicit
request for services, or it may be implicit in System/360
conventions, such as an interruption that occurs at the
completion of an input/output operation. Normally, a number of data-
access routines required by the data management function are
coordinated with the supervisor. The access routines available at any
given time are determined by the requirements of the user’s program, the
structure of the given data sets, and the types of input/output devices in
use.

 JOB SCHEDULER
As the basic independent unit of work, a job consists of one or more
steps. Inasmuch as each job step results in the execution of a major
program, the system formalizes each job step as a task, which may then
be inserted into the task queue by the initiator/terminator (a functional
element of the job scheduler). In some cases, the output of one step is
passed on as the input to another. For example, three successive job
steps might involve file maintenance, output sorting, and report
tabulation.
 The primary activities of the job scheduler are as follows:
 Reading job definitions from source inputs
 Allocating input/output devices
 Initiating program execution for each job step
 Writing job outputs
In its most general form, the job scheduler allows more than one job
to be processed concurrently. On the basis of job priorities and resource
availabilities, the job scheduler can modify the order in which jobs are
processed. Jobs can be read from several input devices and results can
be recorded on several output devices, the reading and recording being
performed concurrently with internal processing.

 MASTER SCHEDULER
The master scheduler serves as a communication control link
between the operator and the system. By command, the operator can
alert the system to a change in the status of an input/output unit, alter the
operation of the system, and request status information. The master
scheduler is also used by the operator to alert the job scheduler of job
sources and to initiate the reading or processing of jobs.

The control program as a whole performs three main functions: job management,
task management, and data management.

Process Management
Any work of processing data is performed by the computing system under the direction of the
supervisor, but before that it must be scheduled and initiated by either the master scheduler or
the job scheduler.

Master Scheduler
– used by the operator to schedule and initiate the work performed by the job scheduler.

Job Scheduler
– used to read, interpret, schedule, initiate, record output for, and terminate the steps of a
series of jobs that are defined and submitted for processing by the programming staff.
– designed to process a continuous series of jobs without unnecessary delays between
one job or job step and another.

Master and job schedulers for MFT (Multiprogramming with a Fixed number of Tasks) and
MVT (Multiprogramming with a Variable number of Tasks) configurations are designed to direct
and control the performance of more than one data processing task at a time, and by so doing
increase the performance of the system as a whole. This is accomplished in two major ways: by
scheduling and initiating the performance of more than one job at a time, and by performing job
support tasks concurrently with the jobs.

Non-Stop Job Processing

Job
 Is the major unit of work performed by the operating system
 Is defined by a series of job control language statements coded by a programmer
(Fig. 1)
 Consists of one or more steps which are defined by the programmer and arranged
in the order in which they are to be performed.
Job Statement
 Provides the job definition containing information concerning the job, such as its
name and priority.
EXEC Statement
 Defines the job step and contains information such as the name of the program to
be executed to perform the job step and the amount of main storage space required
to execute the program.
 The specified program may be a problem-state program supplied by IBM, such as
a language translator, or it may be a problem-state program created by the user of
the system, such as a payroll program.
DD (data definition) Statement
 Identifies and defines the set of data that is processed during a job step
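
Put together, these three statement types form a complete job definition. The following is a
hedged sketch in OS/360 JCL, not taken from the sources above: the program name PAYCALC,
the data set names, and the accounting details are hypothetical.

//PAYJOB   JOB  (1234),'A PROGRAMMER',PRTY=8       job statement: name, accounting, priority
//STEP1    EXEC PGM=PAYCALC,REGION=64K             job step: program to run, storage required
//MASTER   DD   DSNAME=PAYROLL.MASTER,DISP=OLD     data definition: an existing master file
//REPORT   DD   SYSOUT=A                           printed output, routed by output class
//CARDS    DD   *                                  in-stream input data follows
 ...data cards...
/*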

If a series of job step definitions is to be used repeatedly with little or no change, the
programmer can store and catalog them in a procedure library maintained in direct access
storage by the control program. Thereafter, using brief job and execute statements in an input
stream, he can direct the job scheduler to pick up the job step definitions from the procedure
library. If necessary, statements in the input stream can override specifications in the job step
definitions picked up from the procedure library. Using this feature, a system programmer can
predefine standard types of jobs that are commonly performed at an installation, eliminating the
need for applications programmers to redefine standard jobs each time they are performed. He
can also help to ensure that the system is used efficiently and consistently.
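
As a hedged sketch of how such a cataloged procedure would be invoked, assuming a
hypothetical procedure PAYPROC containing a step named PAY with a DD named MASTER:

//DAILY      EXEC PAYPROC                          pick up job step definitions from the procedure library
//PAY.MASTER DD   DSNAME=PAYROLL.DAILY,DISP=OLD    override the MASTER DD of step PAY for this run

The override statement changes only the named specification; everything else comes from the
stored definitions.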

Multiple-Job Processing

Jobs are processed by their steps. Steps in a single job may be dependent upon one
another, so they must be processed sequentially and not in jumbled order. Steps in different jobs
are independent of one another; therefore, they can be performed concurrently. In most
computing systems, jobs and job steps are performed one at a time in a fixed sequential order,
as shown in part A of Fig. 2. No matter how small the job or how large the system, all of the
resources of the system are tied up until the step is completed.

With an MFT or MVT control program, these same jobs can be performed either
sequentially (as shown in part A of Fig. 2) or, if enough resources are available, concurrently, as
shown in part B of Fig. 2. In the latter instance, any one job may take longer to perform because
it may be temporarily delayed from time to time awaiting a resource currently being used to
perform other jobs. However, because the resources are shared among several jobs, the rate at
which the jobs as a whole are performed is significantly increased, resulting in a greater total
throughput.

Job Priorities

So far as job management is concerned, the main difference between the MFT and MVT
control programs has to do with the way in which priority is assigned and main storage space is
allocated.
At an MFT or MVT installation, each job that is submitted for processing can be assigned
a specific priority relative to other jobs. It can also be assigned to any one of several classes of
jobs. When the job definitions are read by the reader/interpreter they are placed in the input work
queue in accordance with their assigned class and priority. A separate input queue is maintained
for each class assigned to the jobs. Within each input queue, the job definitions are arranged in
the order of their priority. Output data produced during a job step can be assigned by the
programmer to any one of up to 36 different data output classes defined at the installation. When
an output writer task is started it can be assigned to process from one to eight different classes of
output. A particular output class may represent such things as the priority of the data, the type of
device that may be used to record it, or the location or department to which it is to be sent.
In an MFT installation, any main storage space not reserved for use by the control
program is logically divided, as specified by the operator, into partitions of various sizes. Each
partition is assigned by the operator for use in performing either a reader/interpreter or output
writer task or a particular class of jobs. The priority of a job step task is determined by the
partition to which it is assigned. Each partition is assigned by the operator to one, two, or three
classes of jobs. Whenever a new job is initiated, it is directed to (or is allocated) a partition that
was assigned to its job class. The operator can change the job class or classes to which a
partition is assigned, and thereby control the mixture of jobs. In addition, since each partition is
assigned a specific priority, he can also control the priority assigned to each class of jobs.
In an MVT installation, any main storage space not reserved for the control program
serves as a pool of storage from which a region is dynamically allocated by the control program to
each job step or job support task as it is initiated. The size of the region to be allocated to each
job step is specified by the programmer in the job or job step definition. The priority of a job is
also specified by the programmer. When an initiator/terminator task is started by the operator, it
can be assigned to initiate jobs from one through eight input queues. By classifying jobs and
assigning initiator/terminators to initiate specific classes of jobs, it is possible to control the
mixture of concurrent jobs; thus, jobs with complementary resource requirements can be
performed concurrently. For example, one initiator/terminator might be assigned to a class of
jobs requiring little CPU time and a great deal of I/O.

Multiprocessing

Multiprocessing is a technique whereby the work of processing data is shared among two
or more interconnected central processing units.

CPU-to-CPU COMMUNICATION
Some ways in which one CPU can communicate with another:
 At one extreme, communication may be represented by a few control signal lines
that are used broadly to synchronize the operation of one CPU with that of
another.
 By using a channel-to-channel adapter.

ADVANTAGES OF MULTIPROCESSING
 Increased availability
 Increased production capacity
 More efficient use of resources
 Data sharing
 Online problem solving
 Inquiry and transaction processing

SCHEDULING
control program – supervisor, master & job scheduler

– efficiently schedules, initiates & supervises the work performed by the computing system

master scheduler – controls the overall operation of the computing system/operating system
combination

job scheduler – enters job definitions into the computing system, schedules & then initiates the
performance of work under control of the supervisor

* The job scheduler is responsible for job management functions. It permits either sequential
FIFO scheduling or non-preemptive priority scheduling. Priority scheduling is accomplished via an
input work queue. The queue reacts to job priorities, which are set by the user as a number from
0 to 14 in increasing importance.

* The job & master schedulers perform a vital role in scheduling & supervising the performance of
work by the computing system.

* The control & the direction of processing data is in the supervisor.

* Before any work can be performed by the system, it must be scheduled & initiated by either the
master scheduler or job scheduler.

* In the MFT & MVT configurations of the OS, the supervisor is capable of directing & controlling
the performance of more than one data processing task at a time.

2 configurations of the OS

1. multiprogramming w/ fixed number of tasks (MFT)
2. multiprogramming w/ variable number of tasks (MVT)

Non-Stop Job Processing

job – major unit of work performed by the OS

Job Characteristics
1. defined by series of job statements
2. consists of one or more steps
3. Individual job definitions can be placed one behind another

*Any set of data that is processed during a job step must be identified & defined within the
definition of the job step using a DD (data definition) statement.
* If a series of job step definitions are to be used repeatedly w/ little or no change, a programmer
can store & catalog them in a procedure library maintained in direct access storage by the control
program.
Multiple Job Processing
* The steps of a data processing job are logically related to one another to produce a specific end
result.
Example of a Job consisting of three steps
1. Translating a source program into an object program
2. Linkage editing the object program to produce a program suitable for loading into main
storage.
3. Loading & executing the program.
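
A hedged JCL sketch of such a three-step job follows. IEYFORT (a FORTRAN IV compiler) and
IEWL (the linkage editor) are program names documented for OS/360; the temporary data set
names are illustrative, and work files such as SYSPRINT and SYSUT1 are omitted, so this is a
simplified outline rather than a complete deck.

//FORTJOB  JOB  (1234),'A PROGRAMMER'
//TRANS    EXEC PGM=IEYFORT                        step 1: translate source into an object module
//SYSLIN   DD   DSNAME=&&OBJMOD,DISP=(,PASS)       object module, passed to the next step
//SYSIN    DD   *
 ...FORTRAN source statements...
/*
//LKED     EXEC PGM=IEWL                           step 2: linkage edit the object module
//SYSLIN   DD   DSNAME=&&OBJMOD,DISP=(OLD,DELETE)
//SYSLMOD  DD   DSNAME=&&GOSET(MAIN),DISP=(,PASS)  load module, suitable for main storage
//GO       EXEC PGM=*.LKED.SYSLMOD                 step 3: load & execute the program just built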

*With an MFT or MVT control program, these same jobs can be performed either sequentially
(part A of the figure) or, if enough resources are available, concurrently as shown in part B of the
figure.
*A set of data in direct access storage can be shared concurrently among several jobs provided it
is not changed in any way by the jobs that are sharing it.
*Multiple-job processing is particularly suited to data processing installations with a high volume
of work & a large number of resources.

Concurrent Job Support Tasks
* Job definitions, & any input data that accompanies them in an input stream, are usually
submitted for processing in the form of punched cards.
* Output data ends up in printed or punched card form.
Steps Performed When Processing Jobs
1. The jobs, in punched card form are normally arranged in priority order.
2. After enough jobs have been accumulated to form a batch, they are transcribed to
magnetic tape.
3. The batch of jobs on the tape is manually scheduled and then processed on the central
computing system.
4. After a batch of output data has been recorded on a tape by the central computing
system, it is manually scheduled & converted to printed or punched card form or a
combination of the two.
5. The printed and punched card output is manually sorted into various classes and
distributed to the individuals who submitted the jobs.

*To avoid such problems at MFT or MVT installations, operations such as reading job & data
cards & printing job output data are performed by the control program as separate tasks,
concurrently with other work.
* At MFT & MVT installations the control program can read job definitions & data from one or
more job input streams and record job output data on one or more output devices, while initiating
& controlling the performance of one or more jobs.
The MFT & MVT Job and Master Schedulers
* Job & Master Schedulers control the concurrent processing of job & job support tasks.
MFT & MVT job schedulers are divided into:
1. reader/interpreter – reads jobs & job step definitions from an input stream & places them
in the input work queue

2. initiator/terminator - selects and initiates the job


3. output writer – reads data from the output work queue & records it on an output device

MFT & MVT master scheduler – serves as a two-way communications link b/w operator &
system; used to relay messages from system to operator, to execute commands like starting &
stopping job scheduling tasks, log operational info, monitor & control the progress of work, etc.
*Any reader/interpreter & output writer tasks that are to be performed at an installation are
defined in much the same way as a single-step job is defined. The specifications can be revised
when the operator starts the task.
MFT & MVT – 15 initiator/terminator tasks to control 15 concurrent jobs
MVT – any number of reader/interpreter & output writers
MFT – 3 reader/interpreter & 36 output writer tasks

MEMORY MANAGEMENT OF IBM OS/360


In order to present some ways of managing memory, we first need a model on which to define
basic objects and concepts. Data and programs are stored, usually in binary form, in a memory
subsystem. On early computers, the memory subsystem was a single main memory. Computers
became faster and computer problems larger, but a single main memory that was both fast
enough and large enough had not really been available. This led to a memory subsystem
organization consisting of a set of devices, typically consisting of a small fast main memory for
the immediate needs of the processor and some larger, slower devices holding data not expected
to be required soon. These devices are usually arranged in a hierarchy and are interconnected
so that data can be moved about independent of the processing of other data.
Thus our simple model, or abstraction, consists of a processor and a memory subsystem, with
information flowing between them. The processor works cyclically, and at the completion of
almost every cycle, a specified piece of information is sent to or requested of the memory
subsystem. The memory subsystem then accomplishes the task with some delay.

Dynamic real memory management in OS/360

Prior to DOS/360 and OS/360, magnetic tape was the principal medium for secondary storage,
but its sequential nature limited its possibilities for memory management. The arrival of disk
storage as an economical second-level storage with good random access capabilities was the
catalyst for a new approach to memory management.

In the design of OS/360 it was decided not only to support compile-time and load-time binding
but, by taking advantage of random access disk storage, to provide execution-time binding as
well. In OS/360 a set of services provides the link between the application program and the
program modules on direct access storage.

Execution-time binding is a particularly important feature of OS/360 for two reasons: first,
because this capability offers the potential of reducing the maximum amount of storage required
by a program during its execution without the preplanning becoming hopelessly complex, and
second, because the operating system itself can use it to great advantage. This is because, just
as with the application program, the operating system requires subroutines and data areas.

Overview of Memory Management Techniques

PCP (Primary Control Program) – 1 task; 1 partition; batch-processing model, in which one job
had all available memory until complete.
MFT (Multiprogramming with a Fixed number of Tasks) – 4 to 15 tasks; fixed partitions; number
of partitions = number of tasks.
MVT (Multiprogramming with a Variable number of Tasks) – unlimited tasks; variable partitions;
created a high degree of external fragmentation.

Memory Structure and Implementation

While the 16 KB to 1024 KB main memory available on OS/360 systems sounds minute to
modern users, most programs of the era fit, in their entirety, into main memory. Even so,
programmers of the era had to use techniques to accommodate programs larger than main
memory would allow. A ready-to-execute program could consist of one or more subprograms
called load modules. Three techniques are described below. These techniques offered obvious
advantages over the simple structure, in which the entire load module is loaded into memory at
once.

Planned overlay structure was utilized when not all of the elements of a program needed to be
actively loaded on the system simultaneously. The programmer was able to segment the program
into load modules that did not need to be present simultaneously in main memory; therefore, one
area of memory could be used and reused by many different modules. This makes very effective
use of main memory.
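
As a hedged sketch, such a tree was described to the linkage editor with control statements like
the following; MAIN, SEGA, and SEGB are hypothetical module names:

 ENTRY    MAIN
 INSERT   MAIN                root segment, resident for the whole run
 OVERLAY  ONE
 INSERT   SEGA                loaded at node ONE when first called
 OVERLAY  ONE
 INSERT   SEGB                loaded at the same node, overlaying SEGA

SEGA and SEGB occupy the same storage, so only one of them is in main memory at any
moment.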

Dynamic serial structure was useful when jobs became more complex, as the advantages of
planned overlay diminish as job complexity increases. In this structure, load modules can be
called dynamically, when they are named in the execution of another module, and memory is
allocated as requests arise. Any load module can be called as a sub-module, or subroutine, of
another module that is currently executing. When control is returned to the calling module, the
memory taken by the subroutine is released but not necessarily overwritten; if another module
calls it, the subroutine can be re-linked to without the need for bringing it back into memory.

Dynamic parallel structure is the only structure that is not serial. It creates a task that can
proceed in parallel with other tasks, but since it uses the ATTACH macro instruction (which allows
a program to establish the execution of another program concurrently with its own execution,
permitting the application program to establish its own multitasking environment) and thus
requires the processor to go into kernel, or supervisor, mode, its use needs to be limited. (Witt, 1966)
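
A minimal assembler sketch of the two dynamic structures, assuming hypothetical module
names SUBCALC and REPORTER; LINK and ATTACH are the documented macro instructions
for the serial and parallel cases respectively:

* dynamic serial: SUBCALC is fetched when named and control
* returns here when it completes
         LINK   EP=SUBCALC
* dynamic parallel: REPORTER proceeds as a concurrent subtask
         ATTACH EP=REPORTER
         ST     1,TCBADDR          TCB address of the new subtask is returned in register 1
         ...
TCBADDR  DS     F                  save area for the subtask's TCB address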

File System of IBM OS/360


A file system is a mapping of file names to a subset of a physical medium

Design Decisions
 File names
 Hierarchical or non-hierarchical
 Types of access control
 Special media characteristics
 Available space
 Media speed
 Read-only or read-write
 Versioning
 Fault recovery
 Record-, byte-, or block-structured
 Many more

Types of Names
os.360.looks.hierarchical.but.isnot(really)

Other Disks
On OS/360 and /370, you could use explicit disk identifiers or you could use
the “system catalog”: a hierarchical structure for file names where the
“directories” didn’t need to live on the same disks as the files
Extensions
OS/360 confused extensions with directory levels

Interface to Lower Layers

OS/360 permitted very general I/O requests

IBM OS/360 I/O


Input/Output system

The method of input/output control would have been a major compatibility problem were it not for
the recognition of the distinction between logical and physical structures. Small machines use
CPU hardware for I/O functions; large machines demand several independent channels, capable
of operating concurrently with the CPU and with each other. Such large-machine channels often
each contain more components than an entire small system.

Channel instructions. The logical design considers the channel as an independently operating
entity. The CPU program starts the channel operation by specifying the beginning of a channel
program and the unit to be used. The channel instructions, specialized for the I/O function,
specify storage blocks to be read or written, unit operations, conditional and unconditional
branches within the channel program, etc. When the channel program ends, the CPU program is
interrupted, and complete channel and device status information are available.

An especially valuable feature is command chaining, the ability of successive channel
instructions to give a sequence of different operations to the unit, such as SEARCH, READ,
WRITE, READ FOR CHECK. This feature permits devices to be reinstructed in very short times,
thus substantially enhancing effective speed.
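
As a hedged sketch in System/360 assembler, a command-chained channel program to locate
and read one disk record could look like this; the labels and record size are illustrative:

SRCH     CCW   X'31',RECID,X'40',5       SEARCH ID EQUAL; X'40' chains to the next command
         CCW   X'08',SRCH,X'00',1        TIC back to SRCH; skipped once the search succeeds
         CCW   X'06',BUFFER,X'00',80     READ DATA: transfer the located 80-byte record
RECID    DC    XL5'00'                   CCHHR identifier of the record being sought
BUFFER   DS    CL80                      input area

The CPU starts this program with a single instruction; the channel then runs the search loop on
its own and interrupts the CPU only when the read completes.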

Standard interface. The generalization of the communication between the central processing
unit and an input/output device has yielded a channel which presents a standard interface to the
device control unit. This interface was achieved by making the channel design transparent,
passing not only data, but also control and status information between storage and device. All
functions peculiar to the device are placed in the control unit. The interface requires a total of 29
lines and is made independent of time through the use of interlocking signals.

Implementation. In small models, the flow of data and control information is time-shared
between the CPU and the channel function. When a byte of data appears from an I/O device, the
CPU is seized, dumped, used and restored. Although the maximum data rate handled is lower
(and the interference with CPU computation higher) than with separate hardware, the function is
identical.

Once the channel becomes a conceptual entity, using time-shared hardware, one may have a
large number of channels at virtually no cost save the core storage space for the governing
control words. This kind of multiplex channel embodies up to 256 conceptual channels, all of
which may be concurrently operating, when the total data rate is within acceptable limits. The
multiplexing constitutes a major advance for communications-based systems.

Channels provide the data path and control for I/O devices as they communicate with the CPU. In
general, channels operate asynchronously with the CPU and, in some cases, a single data path
is made up of several subchannels. When this is the case, the single data path is shared by
several low-speed devices, for example, card readers, punches, printers, and terminals. This
channel is called a multiplexor channel. Channels that are not made up of several such
subchannels can operate at higher speed than the multiplexor channels and are called selector
channels. In every case, the amount of data that comes into the channel in parallel from an I/O
device is a byte (i.e., eight bits). All channels or subchannels operate the same and respond to
the same I/O instructions and commands.

Each I/O device is connected to one or more channels by an I/O interface. This I/O interface
allows attachment of present and future I/O devices without altering the instruction set or channel
function. Control units are used where necessary to match the internal connections of the I/O
device to the interface. Flexibility is enhanced by optional access to a control unit or device from
either of two channels.

IBM OS/360 IPC (INTERPROCESS COMMUNICATION)



Mutual exclusion and synchronization

The ability to allow sharing of, and mutual exclusivity to, a resource is accomplished more by the
programmer than by the operating system. The operating system provides two macro
instructions, enqueue (ENQ) and dequeue (DEQ), which allow the programmer to create a queue
that enables tasks to share resources in a “serially reusable” manner. For example, one might
look at several tasks which must update the same table. Each task is given exclusive access to
the table and must complete its work before another task is given access. The programmer may
create a queue to limit access. The queue for the given resource has a control block which
contains an indicator that can be set to declare the resource busy.
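
A minimal assembler sketch of the convention, assuming hypothetical queue and resource
names; ENQ requests control of the resource and DEQ releases it:

         ENQ   (QNAME,RNAME,E,8,SYSTEM)   exclusive request; the task waits while the resource is busy
* ... update the shared table ...
         DEQ   (QNAME,RNAME,8,SYSTEM)     release; the next enqueued task gains control
QNAME    DC    CL8'SHARED'                queue (major) name
RNAME    DC    CL8'PAYTABLE'              resource (minor) name, 8 bytes long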

Teleprocessing

Teleprocessing refers to a large variety of data processing applications in which data is
received from or sent to a central data processing system over communication lines, including
ordinary telephone lines. Usually the source or destination of the data is remote from the central
processing system, although it can be in the same building. In any event, the source or
destination points of the data are often called terminals or (for some applications) work stations.

Teleprocessing applications range from those in which data is received by a central
processing system and merely stored for later processing, to large complex system applications
in which the hardware and information resources of the central system are shared among a great
many users at remote locations.

General Types of Applications

Several general types of teleprocessing applications that are possible with the operating
system are briefly described below. There are a number of variations and combinations of these
general applications.

DATA COLLECTION
Data Collection is a teleprocessing application in which data is received by a central
processing system from one or more remote terminals and is stored for later processing.
Depending on the specific application, the transfer of data may be initiated either at the terminal
or by the central processing system.

MESSAGE SWITCHING
Message switching is a type of teleprocessing application in which a message received
by the central computing system from one remote terminal is sent to one or more other remote
terminals. Message switching can be used in a nation-wide or world-wide telegraph system or it
can be used by a geographically dispersed business or scientific enterprise to provide
instantaneous communication within the enterprise.

REMOTE JOB PROCESSING


Remote job processing is a type of application in which data processing jobs, like those
that are entered into the system locally, are received from one or more remote terminals and
processed by the operating system.

TIME SHARING
Time sharing is a teleprocessing application in which a number of users at remote
terminals can concurrently use a central computing system.

ONLINE PROBLEM SOLVING


Online problem solving is a form of time sharing that has a great many potential
applications in the fields of education, engineering, and research.

INQUIRY AND TRANSACTION PROCESSING


Inquiry and transaction processing is a teleprocessing application in which inquiries and
records of transactions are received from a number of remote terminals and are used to
interrogate or update one or more master files maintained by the central computing system.

Message Control and Message Processing Programs

Message
 The traditional name for a unit of information.
 May consist of one or more segments.

Two parts of a single-segment message:
a. Message header; followed by,
b. Message text.
Message Header – contains control information concerning the message, such as the
source or destination code of the message, message priority, and the type of
message.
Message Text – consists of the actual information that is routed to a user at a terminal or
to a program in the central computing system that is to process it.

MESSAGE CONTROL PROGRAMS


The main function of a message control program is to control the transmission of
information between an application program in the central computing system and I/O devices at
remote terminals.

Access method routines – routines provided by IBM for use in creating a message
control program.

Three sets of access method routines:


a. Queued Telecommunication Access Method (QTAM)
b. Telecommunications Access Method (TCAM)
c. Basic Telecommunication Access Method (BTAM)
Queued Telecommunications Access Method
The queued telecommunications access method (and the telecommunications access
method, described following this explanation) can be used to create message control programs
for a variety of teleprocessing applications, ranging from message switching or data collection to
high-volume inquiry and transaction processing.
The message control program serves as an intermediary between the I/O devices at
remote terminals and the application programs that process messages (Fig. 3). It enables the
terminals to be referred to indirectly, in much the same way as local I/O devices are referred to,
using such standard macro instructions as GET, PUT, OPEN, and CLOSE. It automatically
performs detailed functions, such as sending or receiving messages, allocating buffers,
translating message codes, formatting messages, and checking for errors.
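
A hedged assembler sketch of an application program driving message queues through these
macro instructions; the DCB names and work area are hypothetical, and the DCB definitions
themselves are omitted:

         OPEN  (INMSG,(INPUT),OUTMSG,(OUTPUT))   open the input and output message queues
NEXT     GET   INMSG,WORKAREA                    obtain the next message from the process queue
* ... examine the message text and build a reply in WORKAREA ...
         PUT   OUTMSG,WORKAREA                   route the reply back toward the terminal
         B     NEXT
WORKAREA DS    CL120                             message work area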

Telecommunications Access Method


The telecommunications access method (TCAM) is similar to QTAM, but offers a wider
range of device and program support. For example, TCAM supports local terminals connected
directly to the computing system, as well as remote terminals connected through communication
lines. For remote terminals, TCAM supports both start-stop and binary synchronous methods of
data transmission; binary synchronous support permits the use of faster terminals than are
available with QTAM. In fact, in TCAM, a terminal may be an independent computing system.
To take advantage of TCAM facilities, QTAM application programs can easily be
converted to TCAM. TCAM facilities include:

 Online testing of teleprocessing terminals and control units.
 Input/output error recording.
 Program debugging aids.
 Network reconfiguration facilities.

Basic Telecommunications Access Method
The basic telecommunications access method (BTAM) is designed for limited
applications that do not require the extensive message control facilities of QTAM or TCAM, or for
applications that require special facilities not normally found in most applications.
The BTAM facilities provide tools that would be required to design and construct almost
any teleprocessing application. These include facilities for creating terminal lists and performing
the following operations:
 Polling terminals.
 Answering.
 Receiving messages.
 Allocating buffers dynamically.
 Addressing terminals.
 Dialing.
 Creating buffer chains.
 Changing the status of terminal lists.
When the basic telecommunications access method is used, READ and WRITE macro
instructions, rather than GET and PUT, are used by an application program to retrieve and
send input and output messages.

MESSAGE PROCESSING PROGRAMS


A message processing program is an application program that processes or otherwise
responds to messages received from remote terminals. In designing the program, all of the
facilities of the operating system are available, including the language translators, service
programs, and the data, program, and task management facilities. Message processing can be
performed sequentially as a series of single tasks, or more than one message can be processed
concurrently.

SECURITY

User management in this mainframe OS was virtually non-existent, and thus the task of
safeguarding sensitive data was limited to the use of a password. A data set can be flagged as
“protected”; the correct password must then be entered on the console before the data set can
be opened. Passwords are stored in a control table that has its own security flag set to
“protected” and can only be reached via the master password.
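
A hedged JCL sketch of flagging a new data set as protected through the label parameter; the
data set, unit, and volume names are hypothetical, and the exact subparameter spelling should
be checked against the installation's JCL reference:

//MASTER  DD  DSNAME=PAYROLL.MASTER,DISP=(NEW,CATLG),UNIT=2311,
//            VOL=SER=WORK01,SPACE=(TRK,(20,5)),LABEL=(,SL,PASSWORD)

When a job later opens this data set, the operator must supply the correct password at the
console.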

REFERENCES:

PDF files:

360Revolution_0406.pdf
DonaldMichaelLudlow-Obituary.pdf
GC28-6534-3_OS360introJan72.pdf
IBM 360 Series Computer.pdf
IBMOS360-by-E-Casey-Lunny-2003-Fall.pdf
ibmsj0501B.pdf

MHTML files:

OS-360 and successors - Wikipedia, the free encyclopedia.mhtml

URLs:

https://users.cs.jmu.edu/abzugcx/public/Student-Produced-Term-Projects/Operating-Systems-2003-FALL/IBMOS360-by-E-Casey-Lunny-2003-Fall.pdf
www.research.ibm.com/journal/rd/255/ibmrd2505N.pdf
http://www.cs.columbia.edu/~smb/classes/s06-4118/120.pdf

