MTech Tech Buzz
DataStage Learning
Mphasis an HP Company
Agenda
• DataStage Architecture
• DataStage Parallel Architecture (Pipelining and Partitioning)
• Job Configuration
• DataSet
Architecture of DataStage
IBM Information Server Client Server Architecture
IBM InfoSphere DataStage is a part of the IBM Information Server suite. DataStage enables us to
define the extraction of data from multiple source systems, transform it in ways
that make it more valuable, and then load it to single or multiple target applications. In
short, it is an Extraction, Transformation and Loading (ETL) tool.
The above image illustrates the client/server architecture of Information Server (DataStage
is an integral part of the Information Server).
Tiers and components in the IBM Information Server
IBM Information Server is installed in logical tiers: Client, Engine, Metadata repository, and Services.
A tier is a logical grouping of software that you map to the physical hardware.
In addition to the main product modules, you install the product components in each tier as needed:
Client
Product module clients that are not Web-based and that are used for development and administration
in IBM Information Server.
Engine
Runtime engine that runs jobs and other tasks for product modules that require the engine.
Metadata repository
Database that stores the shared metadata, data, and configuration for IBM Information Server and the
product modules.
Services
Common and product-specific services for IBM Information Server along with IBM WebSphere®
Application Server (application server).
These tiers work together to provide services, job execution, and metadata and other storage.
In the next slide we will see how these tiers are linked to each other.
DataStage Architecture
DataStage Clients:
• Administrator Client
• Designer Client
• Director Client
• Manager Client

The clients connect over the client network to the DataStage Engine and the Shared Repository.
DataStage Architecture
DataStage Client Components
DataStage Designer:
It is used to create the DataStage application, known as a job. The following activities can be
performed with the Designer:
a) Create the source definition.
b) Create the target definition.
c) Develop Transformation Rules
d) Design Jobs.
DataStage Director:
It is used to validate, schedule, run and monitor DataStage jobs.
DataStage Administrator:
This component is used to create or delete projects, clean up metadata stored in the repository,
and install NLS.
DataStage Manager:
It is used to perform the following tasks:
a) Create the table definitions.
b) Metadata back-up and recovery can be performed.
c) Create the customized components.
DataStage Parallel Architecture
DataStage Parallel Processing
Figure below represents one of the simplest jobs you could have — a data source, a Transformer
(conversion) stage, and the data target. The links between the stages represent the flow of data into or out
of a stage.
In a parallel job, each stage would normally (but not always) correspond to a process. You can have
multiple instances of each process running on the available processors in your system.
Source → Transformation → Target
A parallel DataStage job incorporates two basic types of parallel processing:
• Pipeline parallelism
• Partition parallelism
Both of these methods are used at runtime by the Information Server engine to execute the simple job.
DataStage Parallel Processing
Pipeline Parallelism:
In the sample DataStage job in the previous slide, all stages run concurrently, even in a single-node
configuration. As data is read from the source, it is passed to the Transformer stage for transformation,
where it is then passed to the target. Instead of waiting for all source data to be read, as soon as the source
data stream starts to produce rows, these are passed to the subsequent stages. This method is called
pipeline parallelism, and all three stages in our example operate simultaneously regardless of the degree of
parallelism of the configuration file. The Information Server Engine always executes jobs with pipeline
parallelism.
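The idea can be sketched in plain Python with generators, where each stage yields rows downstream as soon as it produces them. This is a loose analogy only; the stage names and the uppercase transform are illustrative, not DataStage APIs:

```python
def extract(rows):
    # Source stage: yields rows one at a time instead of materializing them all
    for row in rows:
        yield row

def transform(stream):
    # Transformer stage: processes each row as soon as it arrives upstream
    for row in stream:
        yield row.upper()

def load(stream):
    # Target stage: consumes rows as the upstream stages produce them
    return list(stream)

result = load(transform(extract(["ord1", "ord2", "ord3"])))
print(result)  # ['ORD1', 'ORD2', 'ORD3']
```

Because each function is a generator, the first row flows all the way to the target before the source has finished reading, which is the essence of pipeline parallelism.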
DataStage Parallel Processing
Partition Parallelism:
Partitioning parallelism is key for the scalability of DataStage (DS) parallel jobs. Partitioners distribute rows
of a single link into smaller segments that can be processed independently in parallel. Partitioners exist
before any stage that is running in parallel.
DataStage Parallel Processing
Partition Types:
Though partitioning allows data to be distributed across multiple processes running in parallel, it is
important that this distribution does not violate business requirements for accurate data processing. For
this reason, separate types of partitioning are provided for the parallel job developer.
Partitioning methods are separated into two distinct classes:
• Keyless partitioning
Keyless partitioning distributes rows without regard to the actual data values. Separate types of keyless
partitioning methods define the method of data distribution.
• Keyed partitioning
Keyed partitioning examines the data values in one or more key columns, ensuring that records with the
same values in those key columns are assigned to the same partition. Keyed partitioning is used when
business rules (for example, remove duplicates) or stage requirements (for example, join) require
processing on groups of related records.
The default partitioning method used when links are created is Auto partitioning.
DataStage Parallel Processing
Auto Partitioning:
This is the default partitioning method for newly-drawn links. Auto partitioning specifies that the parallel
framework attempts to select the appropriate partitioning method at runtime. Based on the configuration
file, datasets, and job design (stage requirements and properties), Auto partitioning selects between
keyless (same, round-robin, entire) and keyed (hash) partitioning methods to produce functionally correct
results and, in certain cases, to improve performance.
In the Designer canvas, links with Auto partitioning are drawn with the link icon, depicted in below figure:
Auto partitioning is designed to allow beginner DataStage developers to construct simple data flows
without having to understand the details of parallel design principles. However, the Auto partitioning
method might not be the most efficient from an overall job perspective and in certain cases can lead to
wrong results.
August 6, 2013
DataStage Parallel Processing
Keyless Partitioning:
Keyless partitioning methods distribute rows without examining the contents of the data. The partitioning
methods are described in following table:
Keyless partition methods:
Same: Retains existing partitioning from the previous stage.
Round-robin: Distributes rows evenly across partitions, in a round-robin partition assignment.
Random: Distributes rows evenly across partitions, in a random partition assignment.
Entire: Each partition receives the entire dataset.
DataStage Parallel Processing
Same partitioning
Same partitioning performs no partitioning to the input dataset. Instead, it retains the partitioning from
the output of the upstream stage, as shown in Figure below:
Same partitioning does not move data between partitions (or, in the case of a cluster or grid, between
servers), and is appropriate when trying to preserve the grouping of a previous operation (for example, a
parallel Sort).

It is important to understand the impact of Same partitioning in a given data flow. Because Same does not
redistribute existing partitions, the degree of parallelism remains unchanged.
DataStage Parallel Processing
Round Robin partitioning
Round-robin partitioning evenly distributes rows across partitions in a round-robin assignment, similar to
dealing cards. This partitioning method guarantees an exact load balance (the same number of rows
processed) between nodes and is very fast.
Because optimal parallel processing occurs when all partitions have the same workload, round-robin
partitioning is useful for redistributing data that is highly skewed (there is an unequal number of rows in
each partition).
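As a rough illustration (not DataStage code), a round-robin partitioner can be sketched in a few lines of Python:

```python
def round_robin_partition(rows, num_partitions):
    """Deal rows to partitions in turn, like dealing cards."""
    partitions = [[] for _ in range(num_partitions)]
    for i, row in enumerate(rows):
        partitions[i % num_partitions].append(row)
    return partitions

# 10 rows over 3 partitions: sizes can differ by at most one row
parts = round_robin_partition(list(range(10)), 3)
print([len(p) for p in parts])  # [4, 3, 3]
```

Note how the assignment ignores the row contents entirely, which is exactly what makes the method keyless and what guarantees the near-perfect load balance described above.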
DataStage Parallel Processing
Random partitioning
Like Round-robin, Random partitioning evenly distributes rows across partitions, but using a random
assignment. As a result, the order in which rows are assigned to a particular partition differs between job runs.
Because the random partition number must be calculated, Random partitioning has a slightly higher
overhead than Round-robin partitioning.

Though in theory Random partitioning is not subject to regular data patterns that might exist in the source
data, it is rarely used in functional data flows because, though it shares the basic principle of Round-robin
partitioning, it has a slightly larger overhead.
DataStage Parallel Processing
Entire partitioning
Entire partitioning distributes a complete copy of the entire dataset to each partition. Entire partitioning is
useful for distributing the reference data of a Lookup task
(this might or might not involve the Lookup stage).
On clustered and grid implementations, Entire partitioning might have a performance impact, as the
complete dataset must be distributed across the network to each node.

It is useful when you want the benefits of parallel execution, but every instance of the operator needs
access to the entire input data set. You are most likely to use this partitioning method with stages that
create lookup tables from their input.
DataStage Parallel Processing
Keyed Partitioning:
Keyed partitioning examines the data values in one or more key columns, ensuring that records with the
same values in those key columns are assigned to the same partition. Keyed partitioning is used when
business rules (for example, Remove Duplicates) or stage requirements (for example, Join) require
processing on groups of related records. Keyed partitioning is described in following table:
Keyed partition methods:
Hash: Assigns rows with the same values in one or more key columns to the same partition using an
internal hashing algorithm.
Modulus: Assigns rows with the same values in a single integer key column to the same partition using a
simple modulus calculation.
Range: Assigns rows with the same values in one or more key columns to the same partition using a
specified range map generated by pre-reading the dataset.
DB2: For DB2 Enterprise Server Edition with DPF (DB2/UDB) only. Matches the internal partitioning of the
specified source or target table.
DataStage Parallel Processing
Hash partitioning
Hash partitioning assigns rows with the same values in one or more key columns to the same partition
using an internal hashing algorithm.
The diagram shows the possible results of hash partitioning a data set using the field AGE as the
partitioning key. Each record with a given age is assigned to the same partition, so for example records with
age 36, 40, or 22 are assigned to partition 0. The height of each bar represents the number of records in
the partition.
DataStage Parallel Processing
Hash partitioning
Hash is very often used and sometimes improves performance; however, it is important to keep in mind
that hash partitioning does not guarantee load balance, and misuse may lead to skewed data and poor
performance. Hash does not guarantee "continuity".
Following are a few key things about hash partitioning:
Hash partitioning ensures that rows with identical key values will be placed on the same partition.
Hash partitioning is based on a function of one or more columns (the hash partitioning keys) in each record.
Hashing does not necessarily result in an even distribution of data among partitions.
If the same hash function is applied to two tables, rows that share the same key will be co-located on the
same disk.
When hash partitioning, we should select hashing keys that create a large number of partitions, so that
no single node is over-loaded.
Fields that can only assume two values, such as yes/no or true/false, are particularly poor choices as hash
keys.
In DataStage, the data type of a partitioning key may be any data type except raw, subrecord, tagged
aggregate, or vector.
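A minimal Python sketch of the idea follows; the MD5-based hash function and dictionary row layout are illustrative assumptions, not DataStage's internal algorithm:

```python
import hashlib

def hash_partition(rows, key, num_partitions):
    """Rows with identical key values always land in the same partition."""
    partitions = [[] for _ in range(num_partitions)]
    for row in rows:
        # Use a deterministic digest: Python's built-in hash() of a str
        # is salted per process, so it would not be stable across runs
        digest = hashlib.md5(str(row[key]).encode()).hexdigest()
        partitions[int(digest, 16) % num_partitions].append(row)
    return partitions

rows = [{"AGE": 36}, {"AGE": 40}, {"AGE": 36}, {"AGE": 22}]
parts = hash_partition(rows, "AGE", 4)
# Both AGE=36 rows are guaranteed to share a partition; the partition
# sizes, however, are not guaranteed to be balanced
```

This makes the trade-off above concrete: key grouping is guaranteed, but if the key values are skewed (for example, a yes/no field), so are the partitions.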
DataStage Parallel Processing
Modulus partitioning
Modulus partitioning uses a simplified algorithm for assigning related records based on a single integer key
column. It performs a modulus operation on the data value using the number of partitions as the divisor.
The remainder is used to assign the value to a given partition:
partition = MOD(key_value, number_of_partitions)
Like hash, the partition size of modulus partitioning is equally distributed as long as the data values in the
key column are equally distributed.
Because modulus partitioning is simpler and faster than hash, it must be used if you have a single integer
key column. Modulus partitioning cannot be used for composite keys, or for a non-integer key column.
Following is an example of Modulus partitioning with four partitions, where each employee ID is assigned
via MOD(EMP_ID, 4):

EMP_ID values: 100 101 102 103 104 105 106 107 108
PART 0: 100, 104, 108
PART 1: 101, 105
PART 2: 102, 106
PART 3: 103, 107
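The Modulus example can be reproduced with a small Python sketch (illustrative only; DataStage performs this assignment internally):

```python
def modulus_partition(rows, key, num_partitions):
    """partition = key_value MOD number_of_partitions (single integer key)."""
    partitions = [[] for _ in range(num_partitions)]
    for row in rows:
        partitions[row[key] % num_partitions].append(row)
    return partitions

emps = [{"EMP_ID": i} for i in range(100, 109)]
parts = modulus_partition(emps, "EMP_ID", 4)
print([r["EMP_ID"] for r in parts[0]])  # [100, 104, 108]
```

Because consecutive IDs cycle through the partitions, evenly distributed key values give evenly sized partitions, as the text notes.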
DataStage Parallel Processing
Range partitioning
As a keyed partitioning method, Range partitioning assigns rows with the same values in one or more key
columns to the same partition. Given a sufficient number of unique values, Range partitioning ensures
balanced workload by assigning an approximately equal number of rows to each partition, unlike Hash and
Modulus partitioning where partition skew is dependent on the actual data distribution.
To achieve this balanced distribution, Range partitioning must read the dataset twice: the first time to
create a Range Map file, and the second to actually partition the data in a flow using the Range Map. A
Range Map file is specific to a given parallel configuration file.

The read-twice penalty of Range partitioning limits its use to specific scenarios, typically where the
incoming data values and distribution are consistent over time. In these instances, the Range Map file can
be re-used.
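A simplified two-pass sketch in Python may help; the boundary-picking logic is an illustrative assumption, not DataStage's actual Range Map implementation:

```python
def build_range_map(rows, key, num_partitions):
    """First pass over the data: compute partition boundary values."""
    values = sorted(row[key] for row in rows)
    step = len(values) / num_partitions
    # Last key value that belongs in each partition except the final one
    return [values[int(step * (i + 1)) - 1] for i in range(num_partitions - 1)]

def range_partition(rows, key, boundaries):
    """Second pass: place each row using the precomputed range map."""
    partitions = [[] for _ in range(len(boundaries) + 1)]
    for row in rows:
        p = sum(1 for b in boundaries if row[key] > b)
        partitions[p].append(row)
    return partitions

rows = [{"k": v} for v in range(12)]
bounds = build_range_map(rows, "k", 3)   # first pass
parts = range_partition(rows, "k", bounds)  # second pass
print([len(p) for p in parts])  # [4, 4, 4]
```

The two function calls mirror the two reads described above: the map is built once and, if the data distribution is stable over time, can be reused for later runs without repeating the first pass.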
Job Configuration
Job Configuration
APT CONFIGURATION FILE
In DataStage, the degree of parallelism, the resources being used, and so on are all determined at run
time, based entirely on the configuration provided in the APT CONFIGURATION FILE. This is one of the
biggest strengths of DataStage.
The configuration file in DataStage contains:
• The different (logical) processing nodes,
• The disk space provided for each processing node.
There is a default configuration file available whenever the server is installed.
You can typically find it under the \IBM\InformationServer\Server\Configurations
folder with the name default.apt.
Job Configuration
SSaammppllee CCoonnffiigguurraattiioonn FFiillee::
The following example shows a default configuration file from a three-processor SMP computer system.
{
  node "node1"
  {
    fastname "R101"
    pools ""
    resource disk "C:/IBM/InformationServer/Server/Datasets" {pools ""}
    resource scratchdisk "C:/IBM/InformationServer/Server/Scratch" {pools ""}
  }
  node "node2"
  {
    fastname "R101"
    pools ""
    resource disk "C:/IBM/InformationServer/Server/Datasets" {pools ""}
    resource scratchdisk "C:/IBM/InformationServer/Server/Scratch" {pools ""}
  }
  node "node3"
  {
    fastname "R101"
    pools "" "sort"
    resource disk "C:/IBM/InformationServer/Server/Datasets/Node1" {pools ""}
    resource scratchdisk "C:/IBM/InformationServer/Server/Scratch/Node1" {pools ""}
  }
}
The explanation of this config file is given in the next slide.
Job Configuration
Node:
The name of the processing node that this entry defines.
Fastname:
The name of the node as it is referred to on the fastest network in the system. For an SMP system, all
processors share a single connection to the network, so the fastname is the same for all the nodes that
you are defining in the configuration file.
Pools:
Pools allow us to associate different processing nodes based on their functions and characteristics. A node
is by default associated with the default pool, which is indicated by "". If you see an additional entry, such
as a reserved node pool like "sort" or "db2", it means that the node is also part of the specified pool. If you
look at node3, you can see that it is associated with the sort pool; this ensures that the Sort stage will run
only on nodes that are part of the sort pool.
Resource disk:
Specifies the name of the directory where the processing node will write data set files. When you create a
data set or file set, you specify what the controlling file is called and where it is stored, but the controlling
file points to other files that store the data. These files are written to the directory that is specified by the
resource disk field.
Resource scratchdisk:
Specifies the name of a directory where intermediate, temporary data is stored. The location of temporary
files created during DataStage processes, such as lookups and sorts, is specified here.
DataSet
Filesets in DataStage
DataStage (DS) offers various parallel stages for reading from and writing to files. In this chapter we
provide suggestions for when to use a particular stage, and any limitations that are associated with that
stage. A summary of the various stages is provided in the following table.

Sequential File
Suggested usage: Read and write standard files in a single format.
Limitations: Cannot write to a single file in parallel; performance penalty of conversion; does not support
hierarchical data files.

Complex Flat File
Suggested usage: Need to read source data in complex (hierarchical) format, such as mainframe sources
with COBOL copybook file definitions.
Limitations: Cannot write in parallel; performance penalty of format conversion.

Data Set
Suggested usage: Intermediate storage between DataStage parallel jobs.
Limitations: Can only be read from and written to by DataStage parallel jobs or the orchadmin command.

File Set
Suggested usage: Need to share information with external applications; can write in parallel (generates
multiple segment files).
Limitations: Slightly higher overhead than a Data Set.

SAS Parallel Data Set
Suggested usage: Need to share data with an external Parallel SAS application. (Requires SAS connectivity
license for DataStage.)
Limitations: Requires Parallel SAS; can only be read from / written to by DataStage or Parallel SAS.

Lookup File Set
Suggested usage: Rare instances where Lookup reference data is required by multiple jobs and is not
updated frequently.
Limitations: Can only be written; contents cannot be read or verified. Can only be used as a reference link
on a Lookup stage.
Dataset
The Data Set stage is a file stage. It allows you to read data from or write data to a data set. The stage can
have a single input link or a single output link. It can be configured to execute in parallel or sequential
mode.
What is a data set?
Parallel jobs use data sets to manage data within a job. You can think of each link in a job as carrying a
data set. The Data Set stage allows you to store data being operated on in a persistent form, which can
then be used by other WebSphere DataStage jobs. Data sets are operating system files, each referred to by
a control file, which by convention has the suffix .ds. Using data sets wisely can be key to good
performance in a set of linked jobs. You can also manage data sets independently of a job using the Data
Set Management utility, available from the WebSphere DataStage Designer or Director
A data set preserves partitioning: it stores data on the nodes, so when you read from a data set you do
not have to repartition the data. It stores data in binary, in the internal format of DataStage, so it takes
less time to read from or write to a data set than to any other format.
Dataset
By and large, datasets might be interpreted as uniform sets of rows within the internal representation of
the Framework. Commonly, two types of datasets are distinguished: Persistent and Virtual.

The first type, persistent datasets, is marked with the *.ds extension, while the *.v extension is reserved
for the second type, virtual datasets. (It is important to mention that no *.v files are visible in the Unix file
system, as they exist only virtually, inhabiting RAM. The *.v extension itself is characteristic strictly of
OSH, the Orchestrate scripting language.)

Further differences are much more significant. Primarily, persistent datasets are stored in Unix files using
the internal DataStage EE format, while virtual datasets are never stored on disk: they exist within links, in
EE format, but only in RAM. Finally, persistent datasets are readable and rewriteable with the DataSet
stage, while virtual datasets can only be passed through in memory.
Dataset
Typically, Orchestrate datasets are a single-file equivalent for a whole sequence of records: with datasets,
they can be presented as one object. Datasets themselves hide the fact that the requested data is really
composed of multiple, diverse files spread across different processors and disks of parallel computers.
Along with that, the complexity of programming is significantly reduced, as shown in the example below:

Dataset parallel architecture

The primary multiple files, shown on the left side of the scheme, have been bracketed together, resulting
in five nodes. While using datasets, all the files and all the nodes can be boiled down to only one Single
Dataset, shown on the right side of the scheme. Thereupon, you can program against only one file and get
results on all the input files. That significantly shortens the time needed for modifying the whole group of
separate files, and reduces the possibility of introducing accidental errors. What are its measurable
benefits? Mainly, a significant increase in the speed of applications based on large data volumes.
Dataset
Target Category
File
The name of the control file for the data set. You can browse for the file or enter a job parameter. By
convention, the file has the suffix .ds.
Update Policy
Specifies what action will be taken if the data set you are writing to already exists. Choose from:
Append. Append any new data to the existing data.
Create (Error if exists). WebSphere DataStage reports an error if the data set already exists.
Overwrite. Overwrites any existing data with new data.
Use existing (Discard records). Keeps the existing data and discards any new data.
Use existing (Discard records and schema). Keeps the existing data and discards any new data and its
associated schema.
The default is Overwrite.
Source Category
File
The name of the control file for the data set. You can browse for the file or enter a job parameter. By
convention, the file has the suffix .ds.
Feedback
Feedback is the best gift to give and to get
Thank you