© 2006 Pervasive Software Inc. All rights reserved. Design by Pervasive. Pervasive is a registered trademark, and "Integrating the Interconnected World" is a trademark of Pervasive Software Inc. Cosmos, Integration Architect, Process Designer, Map Designer, Structured Schema Designer, Extract Schema Designer, Document Schema Designer, Content Extractor, CXL, Integration Engine, DJIS, Data Junction Integration Suite, Data Junction Integration Engine, XML Junction, HIPAA Junction, and Integration Engineering are trademarks of Pervasive Software Inc. All names of databases, formats and corporations are trademarks or registered trademarks of their respective companies. This exercise scenario workbook was written for Pervasive's Integration Platform software, version 8.14.
Pervasive Integration Platform Training - End User

Table of Contents:
Foreword
The Pervasive Integration Platform
  Architectural Overview of the Integration Platform
  Design Tools
  MetaData Tools
  Production Tools
  Installation Folders
Repository Explorer
  Define a Workspace and Repository
  Configuring Database Connectivity (ODBC Drivers)
  Splash Screen - Licensing and Version Info
Map Designer - Fundamentals of Transformation
  Map Designer - The Foundation
  Interface Familiarization
  Default Map
Connectors and Connections - Accessing Data
  Factory Connections
  User Defined Connections
  Macro Definitions
Automatic Transformation Features
  Source Data Features - Sort
  Source Data Features - Filter
  Target Output Modes - Replace, Append, Clear and Append
  Target Output Modes - Delete
  Target Output Modes - Update
The RIFL Script Editor
  RIFL Script - Functions
  RIFL Script - Flow Control
Transformation Map Properties
  Reject Connection Info
Event Handlers & Actions
  Understanding Event Handlers
  Event Sequence Issues
  Using Action Parameters - Conditional Put
  Using OnDataChange Events
  Trapping Processing Errors With Events
Comprehensive Review
Metadata - Using the Schema Designers
  Structured Schema Designer
    No Metadata Available (ASCII Fixed)
    External Metadata (Cobol Copybook)
    Binary Data and Code Pages
    Reuse Metadata (Reusing a Structured Schema)
    Multiple Record Type Support in Structured Schema Designer
    Conflict Resolution
  Extract Schema Designer
    Interface Fundamentals & CXL
    Data Collection/Output Options
    Extracting Fixed Field Definitions
    Extracting Variable Fixed Field Definitions
EasyLoader
  Overview & Introductory Presentation
  Using EasyLoader to create/run maps using wizard
  Using EasyLoader to create/run maps without wizard
  Creating Targets for use with Easy Loader
Process Designer for Data Integrator
  Process Designer Fundamentals
  Creating a Process
  Conditional Branching - The Step Result Wizard
  Parallel vs. Sequential Processing
  FileList - Batch Processing Multiple Files
Integration Engine
  Syntax: Version Information
  Options and Switches
  Execute A Transformation
  Using a -Macro_File Option
  Command Line Overrides - Source Connection
  Ease of Use: Options File
  Executing a Process
  Using the -Set Variable Option
  Scheduling Executions
Mapping Techniques
  Multiple Record Type Structures
    Multiple Record Type 1 - One-to-Many
    Multiple Record Type 2 - Many-to-One
  User Defined Functions
    Code Reuse - Save/Open a RIFL Script - Code Modules
    Code Reuse - Code Modules
  Lookup Wizards
    Flat File Lookup
    Dynamic SQL Lookup
    Incore Table Lookup
  RDBMS Mapping
    Select Statements - SQL Passthrough
    Integration Querybuilder
    DJX in Select Statements - Dynamic Row Sets
    Multimode Introduction
    Multimode - Data Normalization
    Multimode Implementation with Upsert Action
Management Tools
  Upgrade Utility
    Upgrading Maps from Prior Versions
  Engine Profiler
  Data Profiler
Foreword
This course is designed to be presented in a classroom environment in which each student has access to a computer with the Pervasive Integration Products and this Fundamentals courseware installed. It can also be used as a stand-alone tutorial if the student is already familiar with the interface of the Pervasive tools. The Fundamentals course is not meant to be a comprehensive tutorial covering all of our products. By the end of this course, a student should have a basic understanding of Map Designer, Structured Schema Designer, Extract Schema Designer, Process Designer, and the Integration Engine, and should know how to use these tools and how to expand their own knowledge of them. Further training can be obtained from Pervasive Training Services. Any path mentioned in this document assumes a default installation of the Pervasive software and the Fundamentals courseware. If the student installs differently, that will have to be taken into account when doing exercises or following links. We hope that the student enjoys this class and takes away everything needed. We welcome any feedback.
This section describes the integration stack from the user's perspective.
Design Tools
Here we discuss the tools that we use to create maps (transformations), schemas, profiles and processes.

Data Profiler
Data Profiler is a data quality analysis tool. It analyzes data sets accurately and efficiently, and generates detailed reports on the quality of incoming data. The user defines metrics against which each record in the input file is then compared. These metrics include compliance testing, conversion testing, and statistics collection. Predefined metrics are available to streamline metric definition. Data Profiler can generate clean data files containing records that fit all of the data analysis criteria, and dirty data files containing records that fail any of the data analysis criteria. The reports and the clean and dirty data files can be used as part of an overall business flow that prepares incoming data for processing. This document does not have exercises or courseware on Data Profiler, though there is a one-day course available from Pervasive Training Services.

Structured Schema Designer
The Structured Schema Designer provides a visual user interface for designing schemas for structured data files. The resulting metadata is stored as Structured Schema files with an .ss.xml extension. The .ss.xml files include schema, record recognition rule and record validation rule information. In the Structured Schema Designer, you can create or modify schemas that can be accessed in the Map Designer to provide structure for Source or Target files. You can use the Data Parser to manually parse flat binary, fixed-length ASCII, or record manager files. The Data Parser defines Source record length, Source field sizes and data types, and Source data properties; assigns Source field names; and defines schemas with multiple record types. You can also use the Structured Schema Designer to import schemas from outside sources such as COBOL copybooks, XML DTDs, or Oracle DDLs.
The .ss.xml files created by Structured Schema Designer are used as input in Map Designer as part of a Source or Target connection. This document includes courseware and exercises on the Structured Schema Designer.
Extract Schema Designer
The Extract Schema Designer is a parser tool that allows the user to visually select fields and records from text files that are in an irregular format. Some examples are:
Printouts from programs captured as disk files
Reports of any size or dimension
ASCII or any type of EBCDIC text files
Spooled print files
Fixed-length sequential files
Complex multi-line files
Downloaded text files (e.g., news retrieval, financial, real estate...)
HTML and other structured documents
Internet text downloads
E-mail header and body
On-line textual databases
CD-ROM textbases
Files with tagged data fields
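The kind of field-and-record extraction described above can be sketched in ordinary code. The report layout and the regular expression below are invented for illustration only; Extract Schema Designer builds its recognition rules visually through its GUI, not through regex.

```python
import re

# A hypothetical report-style text file of the kind the tool parses.
report = """\
Acct: 1001  Name: Smith, John     Balance:   250.00
Acct: 1002  Name: Jones, Mary     Balance:  1375.50
"""

# One recognition pattern per line, loosely analogous to the field
# rules the designer captures visually.
pattern = re.compile(r"Acct:\s*(\d+)\s+Name:\s*(.+?)\s+Balance:\s*([\d.]+)")

# Each match yields one extracted record as an (acct, name, balance) tuple.
records = [m.groups() for m in pattern.finditer(report)]
```

The point is only that irregular text still carries recoverable structure; the designer lets you define that structure interactively instead of hand-writing patterns.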
Extract Schema Designer creates schemas that are stored as CXL files. These files are then used as input in Map Designer as part of a source connection. This document includes courseware and exercises on the Extract Schema Designer.
Document Schema Designer
Document Schema Designer is a Java-based tool that allows you to build templates for E-document files. You can custom-build schema subsets for specific EDI Trading Partner and TranType scenarios. In addition, the Document Schema Designer is also very useful to those working with HL7, HIPAA, SAP (IDoc), SWIFT and FIX data files. You can develop schema files for all e-documents that are compatible with Map Designer. The document schemas serve several useful purposes:
File Structure Metadata Support
Parsing Capabilities
Validation Support
In an easy-to-use GUI, the user selects desired segments from the "template" document schemas that are generated from the controlling standards documentation. The segments are saved in a schema file that can be edited. The user may also add segments from a "master" segment library, add loops/segments/composites/elements by hand, add discrimination rules for distinguishing loops/segments of the same type at the same level, and use code tables for data validation. The user can copy, paste and delete any part of the structure, including the segments, elements, composites, loops, and fields (and their subordinate loops/segments/subcomponents). The Document Schema Designer produces DS.XML document schema files that can be used as input in Map Designer as part of a source or target connection. These files can also be used in a Process as part of a Validation step. This document does not have exercises or courseware on Document Schema Designer, though there is a one-day course available from Pervasive Training Services.
Join Designer
Join Designer is an application that allows the user to join two or more single-record type data sources prior to running a Map Designer Transformation on them. These sources do not have to be of the same type. For example, an SQL database table could be joined with a simple ASCII text file. The user first uses Source View Designer to create Source View Files that hold metadata about the Sources. From these a Join View File is created, which contains the metadata needed by Map Designer to treat the Source files as if they were a single Source. The user then supplies this Join View File to Map Designer using "Join Engine" as the connection type. The original Source files and the Source View Files must still be available in the locations specified in the Join View File. When a join is saved, a Join View File (.join.xml) is created. This can be supplied to Map Designer as a Source file or used to create further joins. While a join is limited to two Source files, you can use another join as a Source, thus building up nested joins to any level of complexity. This document does not have exercises or courseware on Join Designer, though there is an exercise in the Advanced course available from Pervasive Training Services.

Map Designer
Map Designer is the heart of the integration product tool set. It transfers data among a wide variety of data file types. In Map Designer, to transfer data, the user designs and runs what is called a Transformation or a Map (the two words are synonymous). Each Transformation created contains all the information Map Designer needs to transform data from an existing data file or table to a new Target data file or table, including any modifications that may be necessary. Map Designer solves complex Transformation problems by allowing the user to:
transform data between applications
combine data from external Sources
change data types
add, delete, rearrange, split or concatenate fields
parse and select substrings; pad or truncate data fields
clean address fields and execute unlimited string and numerical manipulations
control and log errors and events
define external table lookups
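Several of the field manipulations just listed (splitting, concatenating, padding, truncating) can be sketched outside the tool. The record and field names below are hypothetical, and in Map Designer these operations would be expressed in RIFL rather than Python; this is only a plain-code illustration of the transformations themselves.

```python
# A hypothetical source record; field names are illustrative only.
record = {"FullName": "Smith, John", "Zip": "7870", "Notes": "priority customer xxxx"}

# Split one field into two.
last, first = [part.strip() for part in record["FullName"].split(",")]

# Concatenate fields into a new one.
display = f"{first} {last}"

# Pad to a fixed width and truncate to a maximum length.
zip_padded = record["Zip"].rjust(5, "0")   # pad on the left with zeros
notes = record["Notes"][:16]               # truncate to 16 characters
```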
Map Designer creates two files (tf.xml and map.xml) that contain all the information necessary to run a transformation. A transformation can be run from Map Designer, Process Designer or the Integration Engine. Map Designer is covered extensively in this course and is also explored in the Advanced course and the EDI/HIPAA course.
Easy Loader
Easy Loader is a one-to-one record flat file mapper that creates intermediate data load files. This means that Easy Loader supports single record Source data and creates single record flat data load files. All Target type connectors and schemas are predefined, which makes the user interface easy to learn.
In Easy Loader, all events and actions required at run time are automatically created and hidden from view at design time. This is an advantage when the end user is not proficient with the Map Designer tool. The idea is that the designer will create most of the Map at design time, leaving the simple mapping of the source into the target to the end user. Below is a list of more Easy Loader advantages over Map Designer:
predefined Targets and Target schemas
automatic addition of events and actions needed to run
simplified mapping view
auto-matching by name (case-insensitive; Map Designer is case-sensitive)
single field mapping wizard launched from the Target's fields grid (use the wizard to map all fields, or just one field at a time)
predefined Target record validation rules
specific validation error logging for quick data cleansing
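The case-insensitive auto-matching by name mentioned above can be illustrated with a short sketch. The field names are made up, and this is not EasyLoader's actual matching code; it only demonstrates the behavior described.

```python
# Hypothetical source and target field lists with mismatched casing.
source_fields = ["AcctNo", "FIRSTNAME", "lastname"]
target_fields = ["acctno", "FirstName", "LastName", "Balance"]

# Index source fields by lowercased name, then match each target field
# case-insensitively; unmatched targets map to None.
by_name = {name.lower(): name for name in source_fields}
matches = {t: by_name.get(t.lower()) for t in target_fields}
```

Under this scheme "FirstName" matches "FIRSTNAME" and "LastName" matches "lastname", while "Balance" is left unmapped for the user to handle.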
Easy Loader requires a pre-defined Structured Schema, and it creates a tf.xml and a map.xml file that can be used in the same ways as Transformations built in Map Designer. This document includes courseware and exercises on Easy Loader.
Process Designer
Process Designer is a graphical data transformation management tool that can be used to arrange a complete transformation project. Here are some of the Steps that a user can put into a process:
Map Designer Transformation
SQL Command
Decision
RIFL Scripting
Command Line Application
SQL Server DTS Package
Sub-process
Validation
XSLT
Queue
Iterator
Aggregator
Invoker
Transformer
Once the user has organized these Steps in the order of execution, the entire workflow sequence can be run as one unit. This workflow is saved as an IP.XML file which can be run from the Process Designer or from Integration Engine. Process Designer processes can also be packaged using the Repository Manager. This packaging gathers all of the files that are required by the process and puts them into a single DJAR file that can then be run from the Integration Engine. This courseware covers some basic functionality of the Process Designer. Both the Advanced and the EDI/HIPAA courses go into the more advanced functionality of this tool.
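The run-as-one-unit behavior described above can be sketched in plain code. The step functions below are invented stand-ins, not Process Designer's real step types: steps execute in order, and a decision step inspects shared context to choose a branch.

```python
# Minimal sketch of a sequential process runner with a decision step.
def run_process(steps, context):
    """Execute each step in order, collecting its result."""
    results = []
    for step in steps:
        results.append(step(context))
    return results

def transform_step(ctx):
    # Stand-in for a Map Designer Transformation step.
    ctx["rows_loaded"] = 42
    return "OK"

def decision_step(ctx):
    # Stand-in for a Decision step: branch on the previous step's outcome.
    return "success_branch" if ctx["rows_loaded"] > 0 else "error_branch"

outcome = run_process([transform_step, decision_step], {})
```

A real process saved as an IP.XML file carries the same idea: an ordered workflow whose steps share state and whose decisions route execution.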
MetaData Tools
With one exception (see the chapter on Extract Schema Designer), all of our design tools create their maps, schemas or processes as XML files. The Metadata tools organize and manipulate those files.

Repository Explorer
The Repository Explorer is the central location from which the user can launch all of the Designers, including the Map Designer, Process Designer, Join Designer, Extract Schema Designer, Structured Schema Designer, Source View Designer and Document Schema Designer. The user can also open any Repository that has been created, and then open Transformations, Processes or Schema files in that Repository list. The Repository Explorer can also access the version control functionality of CVS or Microsoft Visual SourceSafe, and can check files in and out of repositories using commands in Repository Explorer. There is courseware about the Repository Explorer in this document.

Repository Manager
Repository Manager is designed to facilitate the tasks of managing large numbers of Pervasive design documents, contained in multiple repositories in multiple workspaces. Repository Manager provides a single application to directly access any number of Pervasive design documents, view their contents, make simple updates, bundle them into a package, and generate reports. The features of Repository Manager include:
Open and work with any number of defined Workspaces.
Browse the hierarchy of Workspaces, Repositories, Collections, and Documents.
Search for documents based on text strings, regular expressions, date ranges, Document Types, document-specific fields.
Make minor updates to documents.
Generate an impact analysis of proposed document modifications.
Import and export Documents and Collections.
Package Processes and related documents into a single entity (DJAR) that can be more easily managed and transported.
View and print documents and Reports.
This document does not have exercises or courseware on Repository Manager, though there is an exercise in the Advanced course available from Pervasive Training Services.
Production Tools
These are the tools that allow the user to automate their Transformations and Processes in their production environment.

Integration Engine
Integration Engine is an embedded data Transformation engine used to deploy runtime data replication, migration and Transformation jobs on Windows or Unix-based systems. Because Integration Engine is a pure execution engine with no user interface components, it can perform automatic, runtime data transformations quickly and easily, making it ideal for environments where regular data transformations need to be scheduled and launched. Integration Engine supports the following operating systems: Windows 2000, Windows XP, Windows Server 2003, HP-UX, Sun Solaris, IBM AIX, and Linux. The Integration Engine can work with multiple threads if a multi-threaded license is purchased. There is courseware about the Integration Engine in this document.
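Because the engine has no user interface, jobs are launched from the command line. The sketch below only assembles a hypothetical argument list; the executable name (djengine) and the -Macro_File flag follow the options listed in this workbook's table of contents, but the exact syntax should be verified against your installed version's documentation.

```python
# Assemble a hypothetical command line for running a saved Transformation.
# Flag spellings and the executable name are assumptions, not verified syntax.
map_file = "C:\\Cosmos_Work\\Fundamentals\\AccountsMap.tf.xml"
args = ["djengine", "-Macro_File", "macrodef.xml", map_file]

# Joining the list yields the string a scheduler or batch file would run.
command = " ".join(args)
```

Building the argument list programmatically like this is also how a scheduling script would typically hand the job to the operating system.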
Integration Server
Integration Server is actually an SDK that is installed by default when the integration platform is installed. The core components of the Integration Server SDK are the Engine Controller, Engine Instances (Managed Pool), and the Client API that accesses the Engine Controller through a proxy. Server stability is maintained, scalability is enhanced, and resources are spared through the use of a control-managed pool of EngineExe objects. This allows the Integration Engine to be called as a service. This document does not have exercises or courseware on the Integration Server, though there is a one-day course available from Pervasive Training Services that covers the Integration Server and the Integration Manager.
Integration Manager
Through a browser-based interface, Integration Manager performs deployment, scheduling, on-going monitoring, and real-time reporting on individual or groups of distributed Integration Engines. Since all management is performed from a single administration point, Integration Manager improves operational efficiency in the management of geographically distributed Integration Engines. With the ability to remotely administer any number of integration points throughout the organization, customers can build out their integration infrastructure as required, using a flexible and scalable architecture designed for easy manageability. In other words, the Integration Manager allows the user to schedule and deploy multiple packages (DJAR) amongst multiple Integration Servers across an enterprise. This document does not have exercises or courseware on the Integration Manager, though there is a one-day course available from Pervasive Training Services that covers the Integration Server and the Integration Manager.
Installation Folders
The following screenshots show the default locations and purposes of each installation folder. The primary installation folder contains all the EXE, DLL, OCX, and other files that the software needs to run. Its default location is C:\Program Files\Pervasive\Cosmos\Common800:
The application also uses other system-generated folders. These are similar to INI files in other applications and contain application data. The information is stored in XML documents and the design tools access them for things like user Preferences in each Design tool. These files are specific to the username and can be found in the Documents and Settings folder:
The work a user performs is stored in specification files in the Repository (which we will define next). These are XML files also and can be read with any Internet browser, as can the Settings files:
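Because the specification files are plain XML, they can be inspected with any XML-aware tool, not just a browser. The fragment below is hypothetical and only illustrates the idea; the real layout of a .map.xml or settings file is product-specific and should be examined in your own Repository.

```python
import xml.etree.ElementTree as ET

# An invented fragment standing in for a specification file's contents.
spec = """<Transformation name="AccountsToFixed">
  <Source connector="ASCII (Delimited)"/>
  <Target connector="ASCII (Fixed)"/>
</Transformation>"""

# Standard XML tooling can read what the design tools wrote.
root = ET.fromstring(spec)
source_connector = root.find("Source").get("connector")
```

This openness is what makes the metadata portable: any script or audit process can read the same files the Designers produce.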
One other important folder is created by default when you select a Workspace. It is named Workspace1 and contains metadata files that are created by certain design tools. For example, CXL scripts created by Extract Schema Designer are stored here. RIFL scripts, user-defined code modules, and user-defined connections are also saved into this folder by default. Most importantly, this is the directory for the MacroDef file. We will discuss this file in detail in the Connectors And Connections module.
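Macro definitions let connection strings refer to paths symbolically, so a map can move between environments by changing the macro file rather than the map. The sketch below assumes a $(NAME) placeholder style and an invented macro name; treat both as illustrative assumptions until the Connectors And Connections module covers the MacroDef file itself.

```python
# Hypothetical macro table of the kind a MacroDef file holds.
macros = {"DATA_DIR": "C:/Cosmos_Work/Fundamentals/Data"}

def expand(text, macros):
    """Substitute each $(NAME) placeholder with its macro value."""
    for name, value in macros.items():
        text = text.replace("$(" + name + ")", value)
    return text

# A connection URI written against the macro, resolved at run time.
uri = expand("$(DATA_DIR)/Accounts.txt", macros)
```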
Repository Explorer
The Repository Explorer is at the heart of the integration product design environment. In this central location, you can launch all of the Designers, including the Map Designer, Process Designer, Join Designer, Extract Schema Designer, Structured Schema Designer, Source View Designer and Document Schema Designer. You can create multiple Repositories for any given Workspace. This allows you to separate your metadata however you wish. For example, you could have a Repository for a specific project and another Repository for a different project. You may also wish to create Repositories for Development, QA, and Production metadata as you promote your specification files from one to the next. In the Repository Explorer, you can also access the version control functionality of CVS or Microsoft Visual SourceSafe in order to check your files in and out of repositories using commands in Repository Explorer. IntegrationArchitect_RepositoryExp.ppt
Keywords: Define the Training Workspace and Repository
Start Repository Explorer.
Change the current Workspace Root Directory:
1. Select File > Manage Workspaces (Ctrl+Alt+W).
2. Change the Workspaces Root Directory to the Cosmos_Work folder. This will allow you to use a list of Repositories and Macro definitions specific to your current Workspace.
Modify the default Repository in the current Workspace:
3. Click on the Repositories button in the bottom right-hand corner of the Workspaces dialog box. When you change the Root Directory, a default Workspace and Repository will be created. We are going to modify the default for use during training.
4. Change the name XMLDB to Fundamentals, and navigate to the folder named C:\Cosmos_Work\Fundamentals by clicking the Find button. We will use this Repository to store all of the XML schema and metadata for the training project.
Description
Splash Screen - Shows the Splash Screen for Repository Explorer.
Credits - Gives a list of credits for third-party software components used by the Product.
Version - Displays the following sections:
o License Name: Displays the PATH to the Product License file and the License file name.
o Serial Number: Displays the Product serial number.
o Version: Displays the Product build version number.
o Subscription Ends: Displays the date the license file will expire.
o Users: Displays the number of users licensed for the Product.
o Single User License For:
   Name: Name of the person licensed for the Product.
   Company: Name of the company licensed for the Product.
Support - Displays the Technical Support address, phone/fax number, and web address.
Licensing - Displays all of the Connectors, Features and Products that are licensed in the Product.
Interface Familiarization
Objectives
The Map Designer icons offer you shortcuts when you are creating, modifying, and viewing maps. Here is information pulled from the Help File about the icons and their descriptions.

Description
Default Map
Objectives
At the end of this lesson you should understand the Source and Target tabs and be able to use the new Simple Map view to create a Transformation.
Keywords: Drag and Drop Mapping
Description
In this exercise we'll follow the flow chart below and create a simple map.
Exercise First we need to define our source. 1. Open Map Designer. 2. To the right of the long box next to Source Connection, click the down arrow. This will open the Select Connection Dialog box. 3. Notice there are three tabs. The first time you open this it will open on the Factory Connections tab, but after the first time it will open on the Most Recently Used tab. We will discuss the User Defined Connection tab in a future exercise. 4. Choose the ASCII (Delimited) connector and click OK.
5. To the right of the long box next to Source File/URI, click the down arrow. This will open a Select File dialog browser. We want to choose Accounts.txt in the C:\Cosmos_Work\Fundamentals\Data folder.
6. In the ASCII (Delimited) Properties box on the right side of the Source tab, find the Header property and set it to True. Then click Apply. Any time you make a change in the source or target properties, you will have to click Apply.
7. Use the toolbar icon to open the Source Data Browser. If you see data there, that confirms that you've connected to your source.
8. Close the browser and click on the Target Connection tab. Now we'll be defining our target connection. Note that the Target Connection tab is very similar to the Source Connection tab.
9. In steps two through four above we chose a source connector. Follow those steps again, using Target instead of Source where appropriate. This time we'll choose ASCII (Fixed).
10. In the Target File/URI drop-down, browse to the C:\Cosmos_Work\Fundamentals\Data folder. Type in Accounts_Fixed.txt. Then click Open. We're now connected to our target.
11. Now click on the Map tab.
12. If you see four quadrants on this page, then you are set to the Power Map view and you'll need to follow the next step. If not, you can skip to step 16.
13. From the View menu on the menu bar, choose Preferences. Click on the General tab. Un-check where it says Always show power map view.
We will be working in the Power Map view later in the course, but for now, we will use the Simple Map View.

Pervasive Integration Platform Training - End User
14. Click on the Simple Map View icon in the toolbar.
15. You may have to drag the asterisk from the box next to All Fields in the source and drop it under the Target field name header.
16. Notice that the target has been filled out with fields identical to the source, and that the Target Field Expressions are filled out as well. Validate the Transformation using the check mark icon on the toolbar.
17. You should get a pop-up box that says something like "Map1.map.xml is valid". Click OK.
18. Save the Map as DefaultMap in the C:\Cosmos_Work\Fundamentals\Development folder. Then click the Run Map icon.
19. Click the Target Data Browser and note your results.
There follows some information taken from reports generated by Repository Manager from the DefaultMap transformation in the Solutions folder: Source (ASCII (Delimited)) location $(funData)Accounts.txt
outputmode Replace
Map Expressions
R1.Account Number      Records("R1").Fields("Account Number")
R1.Name                Records("R1").Fields("Name")
R1.Company             Records("R1").Fields("Company")
R1.Street              Records("R1").Fields("Street")
R1.City                Records("R1").Fields("City")
R1.State               Records("R1").Fields("State")
R1.Zip                 Records("R1").Fields("Zip")
R1.Email               Records("R1").Fields("Email")
R1.Birth Date          Records("R1").Fields("Birth Date")
R1.Favorites           Records("R1").Fields("Favorites")
R1.Standard Payment    Records("R1").Fields("Standard Payment")
R1.Payments            Records("R1").Fields("Payments")
R1.Balance             Records("R1").Fields("Balance")
Factory Connections
Objectives: At the end of this lesson you will be able to find and use the appropriate data access Connector.
Keywords: Connectors List, Connection Menu, and Source Connection tab
Description
Factory Connections contains a list of all of the Connectors available to you in Map Designer. Type the first letter of a Connector name to jump to that Connector in the list (or to the first one in the list with that letter). For instance, suppose you want to choose Btrieve v7. Type "B", and BAF will appear. From there, you can scroll down to Btrieve v7 and select it.
Here are the Map Designer Connector Toolbar icons and their descriptions:
New - Allows you to clear the Source tab and define a new source connection.
Open Source Connection - Allows you to open the Select Connection dialog to access the:
o Most Recently Used Tab
o Factory Connections Tab
o User Defined Connections Tab
You can elect to save your Source Connection as an sc.xml file. The advantage of doing this is that you can reuse the Connection in any subsequent Map design. The sc.xml file saves the Source Connector and any properties you have set.
Source Connector Properties - Opens the Source Properties dialog box. These are the same properties available via the Source Connection tab, and they depend upon the Connector to which you are connected. This icon will be active only when you are on the Map tab.
6. Select the User-Defined connection you created previously and you are ready to move to the next exercise.
Macro Definitions
Objectives
At the end of this lesson you will be able to define and use Macros in connection strings.
Keywords: Macro Definition, Workspace
Description
We will create a new macro that we can use to represent the Data sub-directory for our Training Repository. This will allow us to port the schema files more readily from one workstation to another, or deploy them to servers for execution by Integration Engine.
Exercise
From within the Transformation Map Designer:
1. Select the menu item Tools > Define Macros. Notice there is already a macro that is set to the default location of the current Workspace.
2. Click New.
3. Enter funData as the Macro Name value.
4. Click the Macro Value drop-down button, navigate to our workspace, and highlight the C:\Cosmos_Work\Fundamentals\Data folder.
5. Click OK.
6. Add a backslash \ to the end of the macro value.
7. Enter a description if you wish and click OK.
8. Now we can use the syntax $(funData) to represent the entire path to the Data folder.
9. Highlight the portion of the connection string you wish to replace.
10. From the menu bar, select Tools > Paste Macro String.
11. Click on the row of the Macro you want to use (e.g., funData).
Root Macro
If you will often be selecting files from the same directory or parent directory, you can set the Root Macro for automatic substitution. Highlight the Macro you want to use as the root directory and click the Set as Root button.
Also, be sure to set the automatic substitution switch in Map Designer > View > Preferences > Directory Paths:
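As a quick sketch of what the macro buys us (the paths are the ones used in this course), the same source file can be referenced either way; only the macro form ports cleanly to another machine or to Integration Engine:

    ' Literal path, tied to this workstation:
    C:\Cosmos_Work\Fundamentals\Data\Accounts.txt

    ' Macro reference, resolved from the Workspace macro definitions at run time:
    $(funData)Accounts.txt

Because the funData value ends in a backslash, the file name is simply appended to the expanded value.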
4. On Sort Options Tab, click in the Key Expression box to see the down arrow. Click on the down arrow. Choose the State field to use as a key. (Note that you could choose
Build if you want to build a key using an expression to parse out or concatenate parts of different fields. Note also that the sort will default to Ascending order. If you would prefer to sort in descending order, click on the down arrow and select "Descending" from the drop-down list.)
5. Create a target connection to an ASCII (Delimited) file called AccountsSortedbyState.txt. This file doesn't yet exist, so you'll have to type in the file name.
6. Set Header to True and click Apply.
7. Go to the Map Step.
8. Validate the Map.
You may see a dialog box that looks like this. We will go into greater detail on the Default Event Handler and Event Handlers in general later in this courseware.
9. Click OK to accept the Default Event Handler. 10. Save this Map as SourceDataFeatures_Sort in the Development folder. 11. Run the Map.
12. Notice the results in the Status bar.
13. Open the Target Data Browser and notice that the records are now sorted by state.
There follows some information taken from reports generated by Repository Manager from the SourceDataFeatures_Sort transformation in the Solutions folder: Source (ASCII (Delimited)) location $(funData)Accounts.txt
TargetOptions
header True
outputmode Replace
Source R1 Events
AfterEveryRecord - ClearMapPut Record
    target name: Target
    record layout: R1
Map Expressions
R1.Account Number R1.Name R1.Company R1.Street R1.City R1.State R1.Zip R1.Email
4. Note the radio buttons at the bottom of the window where it says Define Source Sample. We could choose a range of records. We could choose to process every Nth record from the source. (The behavior of this is that you always get the first record, then every Nth record, like so: 1, N+1, 2N+1, 3N+1.)
5. In this case, we want our filter to bring in only the records from Texas. We will use the Source Record Filtering Expressions box. This allows us, in the RIFL Scripting Language (see The RIFL Script Editor chapter), to enter a statement that will evaluate to True or False. We will process the records that evaluate to True. In the Source Record Filtering Expressions box, let's type: Records("R1").Fields("State") == "TX"
6. Create a target connection to an ASCII (Delimited) file called AccountsinTX.txt. This file doesn't yet exist, so you'll have to type in the file name.
7. Set Header to True and click Apply.
8. Go to the Map Step.
9. Validate the Map.
You may see a dialog box that looks like this. We will go into greater detail on the Default Event Handler and Event Handlers in general later in this courseware.
10. Click OK to accept the Default Event Handler. 11. Save this Map as SourceDataFeatures_Filter in the Development folder. 12. Run the Map.
13. Notice the results in the Status bar.
14. Open the Target Data Browser and notice that there are only records from Texas.
There follows some information taken from reports generated by Repository Manager from the SourceDataFeatures_Filter transformation in the Solutions folder: Source (ASCII (Delimited)) location $(funData)Accounts.txt
TargetOptions
header True
outputmode Replace
Source R1 Events
AfterEveryRecord - ClearMapPut Record
    target name: Target
    record layout: R1
Map Expressions
R1.Account Number
R1.Name
R1.Company
R1.Street
R1.City
R1.State               Records("R1").Fields("State")
R1.Zip                 Records("R1").Fields("Zip")
R1.Email               Records("R1").Fields("Email")
R1.Birth Date          Records("R1").Fields("Birth Date")
R1.Favorites           Records("R1").Fields("Favorites")
R1.Standard Payment    Records("R1").Fields("Standard Payment")
R1.Payments            Records("R1").Fields("Payments")
R1.Balance             Records("R1").Fields("Balance")
6. We can then choose Account Number (note the space, which is not there in the target field name. That's why Match by Name failed).
7. Now we do the same for each of the remaining fields. Look at the charts below for specific mappings if needed.
8. Alternatively, we could have right-clicked in the AccountNumber Target Field Expression and clicked on Match by Position. In this case, we would have mapped all of our source fields into the target fields correctly. That will not always be the case, however.
9. Click the Run button.
10. Accept the Default Event Handler.
11. Notice the results in the Target Data Browser. Note the number of records in the table.
12. Now let's go back to the Target Connection tab and set the Output Mode to Append.
13. Click the Run button.
14. Notice the results in the Target Data Browser. Note the number of records in the table.
15. Now change the Output Mode to Clear File/Table contents and Append.
16. Run the map and note the results.
17. Save this map as OutputModes_Clear_Append.
There follows some information taken from reports generated by Repository Manager from the OutputModes_Clear_Append transformation in the Solutions folder (note that there are also reports for the OutputModes_Replace and OutputModes_Append maps, but they are identical except for output mode): Source (ASCII (Delimited)) location $(funData)Accounts.txt
Source R1 Events
AfterEveryRecord - ClearMapPut Record
    target name: Target
    record layout: R1
Map Expressions
R1.AccountNumber     Fields("Account Number")
R1.Name              Fields("Name")
R1.Company           Fields("Company")
R1.Street            Fields("Street")
R1.City              Fields("City")
R1.State             Fields("State")
R1.Zip               Fields("Zip")
R1.Email             Fields("Email")
R1.BirthDate         Fields("Birth Date")
R1.Favorites         Fields("Favorites")
R1.StandardPayment   Fields("Standard Payment")
R1.LastPayment       Fields("Payments")
R1.Balance           Fields("Balance")
8. Click the Run button. 9. Notice Results in the Target Data Browser. Note the number of records in the table. 10. Be aware that you will only see results the first time you run the Map. This is because we will remove the matching records the first time and they will no longer exist. You will need to load the original source records into the target table before you run the Delete Mode map a second time. Assuming you correctly ran the previous Map in Clear and Append mode, you can run it again to prime the table. There follows some information taken from reports generated by Repository Manager from the OutputModes_Delete transformation in the Solutions folder: Source (ASCII (Delimited)) location $(funData)InactiveAccounts.txt
Outputmode Delete
Source R1 Events
AfterEveryRecord - ClearMapPut Record
    target name: Target
    record layout: R1
Map Expressions
R1.AccountNumber     Fields("Account Number")
already existed, our output mode was automatically set to Append. Let's set it to Update.
4. Go to the Map Step. Note that in this case we already have target fields defined. This is metadata (field names, field lengths, datatypes) that is coming in from the database. Notice also that some fields are mapped and some are not. The Simple Map view does an automatic Match by Name that pulls in field names that are exact matches from source to target. We will have to do the rest by hand.
5. For the AccountNumber field, we click inside the target field expression, then click the down arrow.
6. We can then choose Account Number (note the space, which is not there in the target field name. That's why Match by Name failed).
7. Now we do the same for each of the remaining fields. Look at the charts below for specific mappings if needed.
8. Alternatively, we could have right-clicked in the AccountNumber Target Field Expression and clicked on Match by Position. In this case, we would have mapped all of our source fields into the target fields correctly. That will not always be the case, however.
9. Note that AccountNumber was automatically set as our key field.
10. Open the Target Keys, Indexes and Options dialog box. Note all the options that are possible using Update mode. In this case the defaults, Update all matching records and insert non-matching records and Update only mapped fields, will take care of us; although Update ALL fields would give us the same results, since we have mapped all fields.
11. Click the Run button.
12. Accept the Default Event Handler.
13. Notice the results in the Target Data Browser. Note the number of records in the table.
14. When we run this map we will be updating the records, so unless you restore the table to its original contents before you run the map again, you won't see any change. You can just run the map we created for the Clear and Append mode and then run the Delete mode map before re-running this one.
There follows some information taken from reports generated by Repository Manager from the OutputModes_Update transformation in the Solutions folder: Source (ASCII (Delimited)) location $(funData)AccountsUpdate.txt
Update AccountNumber
Update Mode Options Update ALL matching records and insert non-matching records.
Source R1 Events
AfterEveryRecord - ClearMapPut Record
    target name: Target
    record layout: R1
Map Expressions
R1.AccountNumber     Fields("Account Number")
R1.Name              Fields("Name")
R1.Company           Fields("Company")
R1.Street            Fields("Street")
R1.City              Fields("City")
R1.State             Fields("State")
R1.Zip               Fields("Zip")
R1.Email             Fields("Email")
R1.BirthDate         Fields("Birth Date")
R1.Favorites         Fields("Favorites")
R1.StandardPayment   Fields("Standard Payment")
R1.LastPayment       Fields("Payments")
R1.Balance           Fields("Balance")
7. In the lower right of the Field Mapping Wizard, we'll choose the matching Source Field by clicking the down arrow. Choose Birth Date and click Next.
8. Choose the Needs additional TRANSFORMATION radio button and click Next.
9. From the Transformation Function List drop-down, choose Datevalmask. (You can save time scrolling through this list by pressing the first letter of the function you want on the keyboard.) Click Next.
10. Click the ellipsis in the DateString area.
11. The RIFL Script Editor pops up. In the lower left pane, click on Source R1. Then in the lower right pane, double-click Birth Date. Click OK.
12. In the Mask area, type in "mm/dd/yyyy". Then click OK, then Next on the wizard, and OK on the pop-up.
Masks are used in many RIFL functions. The only way to know what values to put into those masks is to look in the Help files. Just open the Help files and use the index to find the particular RIFL function you may be using.
The next field we'll work with is the Name field. The source data names are in this format: First Middle Last. A sample from the first record is George P Schell. In this Map we want our target names in the format Last, First Middle Initial. Like this: Schell, George P. In the previous part of this exercise we used the Field Mapping Wizard. Now we'll go directly to the RIFL Script Editor.
13. Left-click in the Name field. Then left-click on the ellipsis. This bypasses the wizard and goes straight to the RIFL Script Editor. If there is not an ellipsis, there will be a drop-down arrow. Click that and choose Build Expression.
14. On the toolbar at the top, click the icon on the far right, Hide Expression Tree. This gives us more room in the Editor window.
15. Delete the Fields("Name") value, or any other value in the Editor pane, so that it's blank.
16. In the lower right pane, scroll down to the NamePart function. Do a single left-click on it. Notice that there is a short description of the function in the lowest right portion of the RIFL Script Editor. There is also the syntax of the function, with descriptive placeholders for the parameters, in the lowest left.
17. Double-click the NamePart function.
18. In the editor window, select the Mask parameter. Type in "l". That's a lower-case L in double quotes.
19. Select the Name parameter. Pull in the source field Name as we did above for the Birth Date. (See step 11.)
20. If we left this function as is, it would return the last name from the source data. We need more than that, though. Let's put an ampersand & after our function, either by typing it or by clicking the concatenation icon.
21. We can see that the RIFL Script Editor will do a lot of the work for us. Let's use what we've learned to finish this script:

NamePart("l", Records("R1").Fields("Name")) & ", " & _
NamePart("f", Records("R1").Fields("Name")) & " " & _
NamePart("mi", Records("R1").Fields("Name"))

For logic purposes this script would need to be all one line. We use the space and underscore characters as a continuation that allows us to move to the next line, to make the script easier to read.
22. Close the RIFL Script Editor and save this Map as RIFL_ScriptFunctions in the Development folder.
23. Run the Map and note the results.
There follows some information taken from reports generated by Repository Manager from the RIFLScript_Functions transformation in the Solutions folder: Source (ASCII (Delimited)) location $(funData)Accounts.txt
Source R1 Events
AfterEveryRecord - ClearMapPut Record
    target name: Target
    record layout: R1
Map Expressions
R1.AccountNumber     Fields("Account Number")
R1.Name              NamePart("l", Records("R1").Fields("Name")) & ", " & _
                     NamePart("f", Records("R1").Fields("Name")) & " " & _
                     NamePart("mi", Records("R1").Fields("Name"))
R1.Company           Fields("Company")
R1.Street            Fields("Street")
R1.City              Fields("City")
R1.State             Fields("State")
R1.Zip               Fields("Zip")
R1.Email             Fields("Email")
R1.BirthDate         datevalmask(Fields("Birth Date"), "mm/dd/yyyy")
R1.Favorites         Fields("Favorites")
R1.StandardPayment   Fields("Standard Payment")
R1.LastPayment       Fields("Payments")
R1.Balance           Fields("Balance")
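As a recap sketch of the NamePart masks used in this map, here is what each call yields for the sample record George P Schell discussed in the lesson (mask meanings as described there; check the Help File for the full mask list):

    NamePart("l", "George P Schell")     ' last name: Schell
    NamePart("f", "George P Schell")     ' first name: George
    NamePart("mi", "George P Schell")    ' middle initial: P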
Notice that the RIFL Script Editor puts the syntax for the If Then Else statement into the editor window for us. We would replace "condition" with a statement that evaluates to True or False; "statement block one" is what we do if the statement is True; "statement block two" is what we do if the statement is False.
6. Let's now enter the following script, replacing what we have in the editor.
Dim A
A = Records("R1").Fields("Birth Date")

If Isdate(A) Then
    datevalmask(A, "mm/dd/yyyy")
Else
    Logmessage("Warn", "Account Number " & Records("R1").Fields("Account Number") & _
        " has an invalid date: " & A)
    Discard()
End If
Line 1 declares a local variable A that will be available to us only in this script. Line 2 sets that A variable to the value contained in the Birth Date field in the source. Line 4 uses the IsDate RIFL function to test the incoming string value to see if it can be converted to a valid date. Line 5 converts that date for use in the target. Lines 7 and 8 are a LogMessage function; note the continuation characters at the end of line 7. The first parameter of a LogMessage function is always either Info, Warn, Error, or Debug. The second parameter is whatever you want to write to your log file; in this case we have a combination of literal strings and data coming from the source. Line 9 is the Discard function, which causes this source record not to be written to the target.
7. Let's click the Validate icon. We should see "Expression contains no syntax errors." at the bottom of the RIFL Script Editor. Click OK.
8. Validate and save this map as RIFLScript_FlowControl.
9. Note the results in the target. Note there are only 201 records in the target.
10. Click on the View TransformMap.log icon. Note the results of our LogMessage functions.
There follows some information taken from reports generated by Repository Manager from the RIFLScript_FlowControl transformation in the Solutions folder:
Source R1 Events
AfterEveryRecord - ClearMapPut Record
    target name: Target
    record layout: R1
Map Expressions
R1.AccountNumber
R1.Name
R1.Company
R1.Street
R1.City
R1.State
R1.Zip
R1.Email             Fields("Email")
R1.BirthDate         Dim A
                     A = Records("R1").Fields("Birth Date")
                     If Isdate(A) Then
                         datevalmask(A, "mm/dd/yyyy")
                     Else
                         Logmessage("Warn", "Account Number " & Records("R1").Fields("Account Number") & _
                             " has an invalid date: " & Records("R1").Fields("Birth Date"))
                         Discard()
                     End If
R1.Favorites         Fields("Favorites")
R1.StandardPayment   Fields("Standard Payment")
R1.LastPayment       Fields("Payments")
R1.Balance           Fields("Balance")
You can affect many areas of the Transformation Map, including log file settings, runtime execution properties, and error handling, and you can define external code modules.
Exercise 1. Using the previous Map, change the Discard() function call to a Reject() function call. 2. Go to the Map Properties dialog and click Build Connection String from Source. 3. Change the file name portion of the connect string to read BadDateRejects.txt. 4. Using the Target Event Handler OnReject, set a ClearMapPut action and change its Target parameter to Reject. 5. Click the Run button (Play button in toolbar). 6. Note the results in the Target Data Browser. 7. Use the Data Browser to examine the BadDateRejects.txt file. There follows some information taken from reports generated by Repository Manager from the Reject_Connect_Info transformation in the Solutions folder:
OnReject - ClearMapPut Record
    target name: Reject
    record layout: R1
    buffered: false
Source R1 Events
AfterEveryRecord - ClearMapPut Record
    target name: Target
    record layout: R1
Map Expressions
R1.AccountNumber
R1.Name
R1.Company
R1.Street
R1.City              Fields("City")
R1.State             Fields("State")
R1.Zip               Fields("Zip")
R1.Email             Fields("Email")
R1.BirthDate         Dim A
                     A = Records("R1").Fields("Birth Date")
                     If Isdate(A) Then
                         Datevalmask(A, "mm/dd/yyyy")
                     Else
                         Logmessage("Warn", "Account Number " & Records("R1").Fields("Account Number") & _
                             " has an invalid date: " & Records("R1").Fields("Birth Date"))
                         Reject()
                     End If
R1.Favorites         Fields("Favorites")
R1.StandardPayment   Fields("Standard Payment")
R1.LastPayment       Fields("Payments")
R1.Balance           Fields("Balance")
you do not use any events yourself, add one event action. The event that it uses is the AfterEveryRecord event for the source file, and the action that it supplies is the ClearMapPut action for the target file. So, if you do nothing, your transformation will automatically read every source record and, for each one, clear the target buffer, execute all of your mapping expressions and then write the target buffer contents to the target file. This event and its associated action are collectively referred to as the Default Event Handler. When the Map Designer supplies this default event handler, you are informed via an on-screen message box. However, the Map Designer supplies the default event handler ONLY if you do not, yourself, set up and use any event handlers. If you do, then the Map Designer WILL NOT ADD the default event handler to those that you set up. (The Map Designer will, however, warn you when you are about to run a transformation that has no event action that will cause a target record to be written.)

Some Representative Events
Some events are very basic and are used frequently. Most of these events will be discussed and used in the exercises in this course module. You should be aware of these events and when they occur. These events are:

BeforeTransformation
This is the first event that occurs in any transformation, and it is very useful for all the housekeeping and set-up tasks that you may wish to perform.

AfterTransformation
This is the last event that occurs before a transformation ends, and it is very useful for accessing final totals and other values, and performing housekeeping and clean-up tasks.

Specific AfterEveryRecord
The word specific refers to an event that is tied to a particular source or target record type. This event occurs whenever a source record of a specific type is read, and it is the ideal place to perform the action you want to do using the values from each source record.
Specific AfterFirstRecord
This event only occurs when the first record of a specific type is read, and it is the ideal event in which to perform housekeeping and set-up tasks that relate to a single record type.

General AfterFirstRecord
The word general refers to an event that is not tied to a particular source or target record type. This particular event occurs only when the first record is read from the source file, and it is again a great place to perform general housekeeping and set-up tasks that relate to all record types.

General AfterEveryRecord
This event occurs whenever a source record is read from the source file, no matter what type it may be. It is the best place to put common tasks, those that will apply to all source records.

Some Representative Actions
There are many actions that you can perform whenever a particular event occurs. Some actions are used very often and are common to many events. The two most common, and the two that we will use most often in the exercises in this course, are:

ClearMapPut
This action does three things. First, it clears the target buffer (for the record type specified in its Layout parameter). Next, it executes all the mapping expressions that you have supplied for each
field in the target buffer, in effect filling the target buffer fields with the data you want. Finally, it writes the contents of that buffer to the target file.

Execute
This action executes a script created with the RIFL Script Editor. The scripts you write and execute perform the work of your transformation.
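Tying the two actions together: the next exercise attaches to each event an Execute action whose expression is a one-line RIFL script, plus a ClearMapPut to write a record. As a sketch in the report notation used later in this module (the pairing of the two actions under one event is our reading of those report excerpts):

    BeforeTransformation
        Execute
            expression: eventName = "Before Transformation"
        ClearMapPut Record
            target name: Target
            record layout: R1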
Most of our exercises make some attempt to mimic a real-world situation in a simplified fashion to get the concept across. This exercise, however, is pure classroom. What we're doing here is setting up a global variable to hold a value. Then, as we enter each event, we'll use an Execute action to give that variable the name of the event. Then we'll write a target record. When the Map has run, our target will show the order in which the events fired.
Exercise
1. Create our map based on the specifications given below.
2. Run the map and observe the result.
There follows some information taken from reports generated by Repository Manager from the Events_SequenceTest transformation in the Solutions folder:

Source (Null)
SourceOptions
    Record count: 5
location: $(funData)EventNames.txt
outputmode: Replace
Variables
Name       Type     Public  Value
eventName  Variant  no      ""
BeforeTransformation - Execute
    expression: eventName = "Before Transformation"
BeforeTransformation - ClearMapPut Record
    target name: Target
    record layout: R1
    buffered: false
AfterTransformation - Execute
    expression: eventName = "After Transformation"
Source R1 Events
AfterEveryRecord - Execute
    expression:
AfterEveryRecord - ClearMapPut Record
    target name: Target
    record layout: R1
    buffered: false
Source Events
AfterEveryRecord - Execute
    expression:
AfterEveryRecord - ClearMapPut Record
    target name: Target
    record layout: R1
    buffered: false
BeforeFirstRecord - Execute
    expression:
BeforeFirstRecord - ClearMapPut Record
    target name: Target
    record layout: R1
    buffered: false
OnEOF - Execute
    expression: eventName = "General OnEOF"
OnEOF - ClearMapPut Record
    target name: Target
    record layout: R1
    buffered: false
Map Expressions
R1.RecordNumber    Fields("Record Number")
R1.EventName       eventName
SourceOptions
header True
Source R1 Events
AfterEveryRecord - ClearMapPut Record
    target name: Target
    record layout: R1
    buffered: false
    count:
        Dim A
        A = Records("R1").Fields("Birth Date")
        ' Use flow control to test for a valid date
        If Isdate(A) Then
            ' Enable the Put action by setting count to one
            1
        Else
            ' Invalid date, log a message
            Logmessage("Error", "Account number: " & Records("R1").Fields("Account Number") & _
                " Invalid date: " & Records("R1").Fields("Birth Date"))
            ' Increment counter
            myBadDates = myBadDates + 1
            ' Suppress the Put action by setting count to zero
            0
        End If
Map Expressions
R1.AccountNumber     Records("R1").Fields("Account Number")
R1.Name              Records("R1").Fields("Name")
R1.Company           Records("R1").Fields("Company")
R1.Street            Records("R1").Fields("Street")
R1.City              Records("R1").Fields("City")
R1.State             Records("R1").Fields("State")
R1.Zip               Records("R1").Fields("Zip")
R1.Email             Records("R1").Fields("Email")
R1.BirthDate         datevalmask(Records("R1").Fields("Birth Date"), "mm/dd/yyyy")
R1.Favorites         Records("R1").Fields("Favorites")
R1.StandardPayment   Records("R1").Fields("Standard Payment")
R1.LastPayment       Records("R1").Fields("Payments")
R1.Balance           Records("R1").Fields("Balance")
Exercise 1. Create our map based on the specifications given below. 2. Run the map and observe the result. There follows some information taken from reports generated by Repository Manager from the Events_OnDataChange transformation in the Solutions folder:
Variables
Name        Type     Public  Value
varState    Variant  no      ""
varCounter  Variant  no      0
varBalance  Variant  no      0
SourceOptions
header True
Sort Fields
Fields("State") type=Text, ascending=yes, length=2
TargetOptions
Outputmode Replace
Source R1 Events
AfterEveryRecord
expression
Execute
' Set the state value for the current record because it will be different "OnDataChange"
varState = Records("R1").Fields("State")
' Increment the counter for the number of records within this block
varCounter = varCounter + 1
' Accumulate the balance for the records within this block
varBalance = varBalance + Records("R1").Fields("Balance")
OnDataChange1
ClearMapPut Record
Target R1 false
OnDataChange1
expression
Execute
' Reset these vars for next block of records
varCounter = 0
varBalance = 0
Record R1
Name                       Type  Length  Description
State                      Text  16
Number_of_Accounts         Text  16
Total_Balance_of_Accounts  Text  16
Total                            48
Map Expressions
R1.State R1.Number_of_Accounts R1.Total_Balance_of_Accounts
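The OnDataChange pattern above is a classic control break: accumulate while the monitored field is unchanged, emit a summary row and reset the variables when it changes. A Python sketch of the same logic, with hypothetical sample data (the map does this through its sorted source plus the AfterEveryRecord and OnDataChange1 events):

```python
def summarize_by_state(records):
    """Control-break aggregation: records must already be sorted by State,
    just as the map sorts its source on the State field."""
    results = []
    var_state, var_counter, var_balance = None, 0, 0.0
    for rec in records:
        if var_state is not None and rec["State"] != var_state:
            # OnDataChange: write the summary row, then reset the variables
            results.append((var_state, var_counter, var_balance))
            var_counter, var_balance = 0, 0.0
        var_state = rec["State"]         # track the current block's state
        var_counter += 1                 # count records within this block
        var_balance += rec["Balance"]    # accumulate the balance
    if var_state is not None:            # OnEOF: flush the final block
        results.append((var_state, var_counter, var_balance))
    return results
```

The final flush mirrors why the map also needs an end-of-data event: the last block has no following record to trigger the data change.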
Another interesting thing about this exercise is that we will write a file, separate from our target, that will contain the values of the payments and balances that cause the error. We'll do this with the FileAppend function. We'll also see other file manipulation functions.
Exercise
1. Create our map based on the specifications given below.
2. Run the map and observe the result.
3. Also observe the DivideByZero.txt file that was created in the Data folder.
There follows some information taken from reports generated by Repository Manager from the ErrorHandling_OnError_Event transformation in the Solutions folder:
Variables
Name           Type     Public  Value
flagFirstTime  Variant  no      0
SourceOptions
header True
TargetOptions
header True
Outputmode Replace
Source R1 Events
AfterEveryRecord ClearMapPut Record
Target R1
Source Events
BeforeFirstRecord
expression
Execute
Dim A
A = MacroExpand("$(funData)")
If FileExists(A & "DivideByZero.txt") Then
    FileDelete(A & "DivideByZero.txt")
End If
/* This example shows the functionality of the MacroExpand, FileExists and FileDelete
functions, though similar results could be had by using:
FileWrite("$(Data)DivideByZero.txt", "AcctNumber" & sep & "Payt" & sep & "Bal" & crlf)
This would replace any existing file with a file that contains only the header.
This would also make the flagFirstTime variable unnecessary. */
Target Events
OnError
expression
Execute
Dim sep, crlf
sep = "|"
crlf = Chr(13) & Chr(10)
If flagFirstTime = 0 Then
    FileAppend("$(funData)DivideByZero.txt", "AcctNumber" & sep & "Payt" & sep & "Bal" & crlf)
    ' set flag to 1 so header will not be written next time
    flagFirstTime = 1
End If
FileAppend("$(funData)DivideByZero.txt", Records("R1").Fields("Account Number") & sep & _ Records("R1").Fields("Payments") & sep & _ Records("R1").Fields("Balance") & crlf)
OnError
Resume
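The OnError handler above writes a header line only on the first error, then appends one delimited line per failing record. The same write-header-once append pattern, sketched in Python (the file name and column names follow the exercise; the actual map uses RIFL's FileAppend and a flag variable):

```python
import os

def log_error_record(path, fields, header=("AcctNumber", "Payt", "Bal")):
    """Append one pipe-delimited error line, writing the header only when
    the file does not yet exist (replacing the flagFirstTime variable)."""
    sep = "|"
    write_header = not os.path.exists(path)
    with open(path, "a", encoding="ascii", newline="") as f:
        if write_header:
            f.write(sep.join(header) + "\r\n")
        f.write(sep.join(str(v) for v in fields) + "\r\n")
```

Checking for the file's existence instead of keeping a flag is essentially the alternative that the comment in the BeforeFirstRecord event alludes to.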
Map Expressions
R1.Account Number R1.Payments R1.Balance R1.MonthsToGo
Records("R1").Fields("Account Number")
Records("R1").Fields("Payments")
Records("R1").Fields("Balance")
Dim A, B
A = Records("R1").Fields("Payments")
B = Records("R1").Fields("Balance")
If Int(B/A) == B/A Then
    B/A
Else
    Int(B/A) + 1
End If
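The MonthsToGo expression rounds Balance/Payments up to the next whole number: if the division is exact it is used as-is, otherwise Int(...) + 1. The same ceiling logic in Python, for illustration:

```python
def months_to_go(payment, balance):
    """Ceiling of balance/payment without math.ceil, mirroring the
    Int(B/A) == B/A test in the map expression.

    Raises ZeroDivisionError when payment is 0 -- exactly the condition
    that the map's OnError event traps and logs."""
    q = balance / payment
    return int(q) if int(q) == q else int(q) + 1
```

A payment of zero divides by zero here just as it does in the map, which is what drives records into the DivideByZero.txt error file.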
Comprehensive Review
Put together everything you have learned so far.
Workshop Exercise
To test our knowledge and review the introductory module for the Cosmos Integration Essentials courses, we want to design a Map to load the Accounts.txt file into a target database.
Basic Map specifications:
Source Connector: ASCII (Delimited)
Source File: Accounts.txt
Header property: True
Target Connector: ODBC 3.x
Data Source Name: TRAININGDB
Table: tblIllini
Output Mode: Replace Table
Exercise
1. Map the four target fields with appropriate data from the source.
2. Reject all records from the state of Illinois into an ASCII Delimited file called Reject_Accounts.txt.
3. Use an appropriate Date/Time function to convert the formatted date strings into a real date-time data type.
4. Test for invalid dates using the IsDate function and reject those records as well.
5. Aggregate the Balances from all rejected records using a global variable.
6. Report the aggregated balance in the log file using the LogMessage function.
There is a map that does all of this in the Solutions folder. It's called Comprehensive_Review1. Open it and look only if you get stuck. It should be noted that the solution map shows only one way to do this; there are several.
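The review map combines the techniques from the preceding lessons: filter by state, validate dates, accumulate rejected balances in a global variable, and report the total once at the end. A compact Python analogue under assumed field names (the solution map Comprehensive_Review1 implements this with RIFL events and LogMessage):

```python
from datetime import datetime

def load_accounts(records):
    """Split records into loadable rows and rejects, accumulating the
    rejected balance the way the solution's global variable does."""
    loaded, rejects, rejected_balance = [], [], 0.0
    for rec in records:
        try:
            born = datetime.strptime(rec["Birth Date"], "%m/%d/%Y")
        except ValueError:
            born = None                      # IsDate test failed
        if rec["State"] == "IL" or born is None:
            rejects.append(rec)              # route to Reject_Accounts.txt
            rejected_balance += float(rec["Balance"])
        else:
            loaded.append({**rec, "Birth Date": born})
    # LogMessage equivalent: report the aggregate once, after all records
    print(f"Aggregated rejected balance: {rejected_balance:.2f}")
    return loaded, rejects, rejected_balance
```

As the workbook notes, this is only one way to structure the logic; the ordering of the two reject tests, for instance, is a free choice.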
Name       Format                Length
ACCTNUM    Display                9
NAME       Display               21
COMPANY    Display               31
STREET     Display               35
CITY       Display               16
STATE      Display                2
POSTCODE   Display               10
EMAIL      Display               25
BIRTHDATE  Display               10
FAVORITES  Display               11
STDPAYT    Display sign leading   6
LASTPAYT   Display sign leading   6
BALANCE    Display sign leading   6
Exercise
1. Start a New Structured Schema Design.
2. Click the Visual Parser button (red knife).
3. Change the Code Page property to 37 US EBCDIC (click the Apply button!).
4. Navigate to the file named Accounts_Binary.bin.
5. Determine the record length by looking for patterns in the file.
6. Overtype the Length and hit the Enter key (try 180; what happens?).
7. After you have the columns lined up, parse the fields, then select data types and field properties until you have defined the structure.
8. Save the Structured Schema as BinaryDataCodePages.ss.xml for reuse.
Record Layouts
Record R1
Name             Type            Length
AccountNumber    Text              9
Name             Text             21
Company          Text             31
Address          Text             35
City             Text             16
State            Text              2
ZipCode          Text             10
Email            Text             25
BirthDate        Date              4
Favorites        Text             11
StandardPayment  Packed decimal    6
Payments         Packed decimal    7
Balance          Packed decimal    6
Total                            183
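The source file is EBCDIC (code page 0037), so its bytes must be decoded before the text fields are readable. Python's standard codecs include cp037, which makes it easy to peek at such data (a sketch for inspection only; the map handles this through the codepage source option, and the sample bytes below are an assumption, not taken from Accounts_Binary.bin):

```python
# Decode a fragment of EBCDIC (code page 037) data.
# 0xC1 0xC3 0xC3 0xE3 are the cp037 encodings of "ACCT".
raw = bytes([0xC1, 0xC3, 0xC3, 0xE3])
print(raw.decode("cp037"))                      # -> ACCT

# A fixed-width text field is just a slice of the decoded record;
# here a 9-byte account number is followed by a 21-byte name field.
record = ("A" * 9 + "JOHN DOE".ljust(21)).encode("cp037")
account = record[0:9].decode("cp037")
name = record[9:30].decode("cp037").rstrip()
```

The packed-decimal fields in the layout above are a different matter: they are binary-coded numbers, not character data, which is why the Structured Schema must carry explicit data types rather than treating everything as text.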
There follows some information taken from reports generated by Repository Manager from the Reusing_Structured_Schema transformation in the Solutions folder: Source (Binary)
location $(funData)Accounts_Binary.bin
SourceOptions
codepage 0037 US (EBCDIC)
Structured Schema
TargetOptions
header True
Source R1 Events
AfterEveryRecord ClearMapPut Record
Target R1
Map Expressions
R1.AccountNumber R1.Name R1.Company R1.Address
You have record layout definitions available in a printed document:
Select the connector type in the SSD
Use the Grid layout to define each record type and its fields
Use the ALL Record Type Rules>Recognition dialog to define at least one rule for each record type
You have no definitions available, only the data file:
Activate the SSD Visual Parser for your file
Name and parse each record type
Find and select the discriminator field
Use the Recognition Rules button to activate the Recognition Rules dialog and define at least one rule for each record type
The common element to these strategies is the definition of the recognition rules. These are defined in the Recognition Rules dialog, which is activated from either the ALL Record Type Rules>Recognition hierarchy item or the individual R1 Rules>R1 Recognition items on the grid layout in the SSD. First, you'll identify the discriminator: the field whose contents will be used to tell the record types apart. Next, you can use the Generate Rules button to automatically generate some skeleton rules for each record type. Finally, you can add the actual value that the discriminator field will contain for each record type (and adjust other properties of the rules as you wish). When you're done, the structured schema for the file can be saved.
Scenario
We've been given a source file (Payments_MultiRecType.txt) that contains multiple record types, but we have not been given any information about the file, its records or its fields. We do know that there are payment records and a total record, and that the payment records are supposed to contain an account number, payment date and payment amount. The total record is supposed to contain a file date and file total, but we don't know where in the record each field is. We need to define a structured schema for this file.
Exercise
1. Begin a new Map Design.
2. Point the source to the ASCII Fixed file Payments_MultiRecType.txt.
3. Browse the source file and determine whether record types exist. Close the browser.
4. Click the Build Schema... button for the Structured Schema.
5. Click the Parse Data button.
6. Rename the Record to Payment and parse a payment record.
Record Payment
Name             Type  Length
RecordIndicator  Text    1
AccountNumber    Text    9
PaymentDate      Text    8
PaymentAmount    Text   11
Total                   29
7. Click the Add Record button and add the CheckSum record type.
8. Scroll down until you find the next different structured record (row 30).
9. Parse that record type with its fields.
Record CheckSum
Name             Type  Length
RecordIndicator  Text    1
EmptiedDate      Text    8
Action           Text    3
TotalAmount      Text    9
PaymentCount     Text    4
ClerkID          Text    4
Total                   64
10. Select the Payment record from the Record dropdown and ensure that the RecordIndicator field is displayed in the Field Name box. 11. Check the Discriminator check box. 12. Click the Recognition Rules... button. 13. Click the Generate Rules button. 14. Define PaymentRule1 to be that the discriminator field equals P (Note that quotation marks are not used in the Value box). 15. Define CheckSumRule1 to be that the discriminator field must be equal to E. 16. Return to the Structured Schema Designer dialog. 17. Save the structured schema as Payments_MultiRecType.ss.xml. 18. Close the Structured Schema Designer. 19. Browse the source file again and note how the structured schema information has been applied to it. Look at both kinds of records and see how the browser changes. 102 Pervasive Integration Platform Training - End User
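The recognition rules boil down to: inspect the discriminator field and route each line to the matching record layout. A Python sketch of that dispatch, with field offsets assumed from the lengths in the record layouts above (the rules themselves, PaymentRule1 = "P" and CheckSumRule1 = "E", come from the exercise):

```python
def classify(line):
    """Route a fixed-width line to a record layout by its discriminator,
    as recognition rules PaymentRule1 ('P') and CheckSumRule1 ('E') do.
    Offsets are illustrative, derived from the layout lengths above."""
    tag = line[0:1]
    if tag == "P":
        return {"type": "Payment",
                "AccountNumber": line[1:10].strip(),
                "PaymentDate":   line[10:18].strip(),
                "PaymentAmount": line[18:29].strip()}
    if tag == "E":
        return {"type": "CheckSum",
                "EmptiedDate":  line[1:9].strip(),
                "Action":       line[9:12].strip(),
                "TotalAmount":  line[12:21].strip(),
                "PaymentCount": line[21:25].strip()}
    raise ValueError("no recognition rule matched: " + repr(tag))
```

A line whose discriminator matches no rule is an error here; the Map Designer equivalent is its schema-mismatch handling, covered in the next lesson.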
Conflict Resolution
Objectives At the end of this lesson you should be able to use a Structured Schema to set up a Map that uses one source record type to verify the data in the other record type. Keywords: Schema Mismatch Handling, Record Specific Event Handlers, and Validation Description Our newly defined payment file structure allows us more robust data validation opportunities as we load the Payments table because we have some checksum values on which we can evaluate data. We are not going to use the additional Clerk fields in our Payments table but we will take the opportunity to refine the Payments table structure and modify our Transformation Map that loads it. The additional record layout in our payments file has data that allows us to evaluate aggregated data with checksum values. We can make use of the record specific Event Handlers to perform the evaluations at the appropriate time. Default Event Actions - Multiple Record Types The Map Designer no longer sets the default Event Handler for you. Once you have specified any other event actions OR you have a map with multiple record types, you must define the actions yourself. Exercise 1. Build this map based on the specifications in the reports below. There follows some information taken from reports generated by Repository Manager from the Multi_Rec_Payment_Validation transformation in the Solutions folder:
Structured Schema
Payments_MultiRecType.ss.xml
table
tblPaymentsVerified
Variables
Name             Type     Public  Value
paymentCounter   Variant  no
paymentSubtotal  Variant  no
Payment Events
AfterEveryRecord
expression
Execute
AfterEveryRecord
ClearMapPut Record
Target R1 false
CheckSum Events
AfterEveryRecord
expression
Execute
'This code can be imported by the menu, File > Open Script File > ChecksumTest.rifl
' declare temp variables used for better readability
Dim CRLF, realTotal, realCount
CRLF = Chr(13) & Chr(10)
realTotal = Records("CheckSum").Fields("TotalAmount")
realCount = Records("CheckSum").Fields("PaymentCount")
' display current count and payment sub-total for each clerk
MsgBox("---New Checksum---" & CRLF & _
    "PaymentCounter= " & paymentCounter & " : Should be = " & realCount & CRLF & _
    "Paymt Amt= " & paymentSubtotal & " : Should be = " & realTotal)
' evaluate count and sub-total for inconsistencies
If paymentSubtotal <> Trim(realTotal) Then
    MsgBox("Total payment amount for this clerk does not match checksum amount!!!", 48)
End If
If paymentCounter <> Trim(realCount) Then
    MsgBox("Payment Count for this clerk does not match checksum amount!!!", 48)
End If
' reset global variables for next clerk
paymentCounter = 0
paymentSubtotal = 0
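The event above compares the running payment count and sub-total against the values carried in the CheckSum record itself. The core comparison, sketched in Python (field names follow the layouts in this lesson; the sample values in the test are hypothetical):

```python
def verify_block(payments, checksum):
    """Compare a clerk's block of payments against its CheckSum record,
    mirroring the count and sub-total tests in the AfterEveryRecord event."""
    count = len(payments)
    subtotal = round(sum(p["amount"] for p in payments), 2)
    errors = []
    if subtotal != checksum["TotalAmount"]:
        errors.append("total mismatch")
    if count != checksum["PaymentCount"]:
        errors.append("count mismatch")
    return errors   # an empty list means the block verifies cleanly
```

As in the map, the comparison happens once per CheckSum record, i.e. at the boundary between one clerk's block and the next.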
Map Expressions
R1.AccountNumber R1.PaymentDate
R1.PaymentAmount
IntegrationArchitect_ExtractSchemaDesigner.ppt
Keywords: Extract Schema Designer Mechanics: Line Styles, Fields, Accept Record, Automatic Parsing
Description The first file that we will be parsing is Purchases_Phone.txt. We should take a look at it first in a text viewer. Although it might be possible to use this report file as a direct input for a transformation, we would have to define it as a multiple-record-type file. With so many record types and so much processing involved with them, writing the transformation would be time consuming. So what we plan to do is use the Extract Schema Designer to create an extract specification that will transform the report file into a more familiar row/column format, and then use that formatted data as input to the transformation that adds these purchases to the database table. We don't even have to have a two-step procedure or read the report file twice. Once the extract schema is defined, we can create a transformation, specify the report file as the Source, and apply the Extract Schema to it. The file will then be presented to the transformation in simple rows and columns, complete with headers. Exercise Start Extract Schema Designer. 1. From the Repository Explorer, select New Object>Extract Schema. 2. At the prompt, navigate to the file you will be working with, in this case, Purchases_Phone.txt. 3. Choose OK to accept the Source Options defaults. 4. Highlight the word Category on one of the Category lines and right-click in the highlight. 5. Select Define Line Style>New Line Style. 6. Verify that all defaults are acceptable and click Add. We've now defined a Line Style for the Category field. 7. Highlight the Category code on one of the Category lines and right-click in the highlight. 8. Select Define Data Field>New Data Field. 9. Change the field name to Category. 10. Verify that all other defaults are acceptable and click Add. We've now defined the Category Data Field. 11. Highlight a ProductNumber and the rest of the spaces on the line and right-click in the highlight. 12. Select Define Data Field>New Data Field. 13. Change the field name to ProductNumber. 14. 
Verify that all other defaults are acceptable and click Add. 15. Highlight a Quantity and all but one of the spaces between the actual digits of the Quantity and the colon following the literal Quantity (if any).
16. Right-click in the highlight and select Define Data Field>New Data Field. 17. Change the field name to Quantity. 18. Verify that all other defaults are acceptable and click Add. Now let's ensure that Source Options will allow parsing: 20. Select Source>Options from the Menu bar. 21. On the Extract Design Choices tab, look in the Tag Separator dropdown to see if there is a character sequence that matches the sequences used in your data to separate Line Style tags from actual data. If there is, select it. If there is not, then automatic parsing is not available. Also on this tab, ensure that the Trim Leading and Trailing Spaces checkbox is selected. 22. On the Display Choices tab, ensure that the Pad Lines checkbox is selected. 23. Click OK to accept the selections. Now let's define the UnitCost Line Style and Data Field simultaneously. 24. Highlight an entire UnitCost line in the data and right-click in the highlight. 25. Select Define Data Field>Parse Tagged Data. NOTE: When Line Styles and Fields are defined in this way, the default name for the Field is exactly the same as that for the Line Style, so no change to the field name is usually necessary. If a change is desired, however, point your cursor to the actual field data in the display and double-click on the data. This will bring up the Field Definition dialog box and you can change the name (or other characteristics) here. Now we'll define the TotalCost and ShipmentMethodCode Line Styles and Data Fields simultaneously. 26. Highlight an entire TotalCost line and ShipmentMethodCode line in the data. 27. Right-click in the highlight and select Define Data Field>Parse Tagged Data. The next thing is to define the Line Style that determines the end of a row of data for the Extract File. 28. Locate the Line Style that contains the Field that will be the last column in each row in the eventual extract file (in this case, ShipmentMethodCode). 29. 
Double-click on the Line Style name to bring up the Line Style Definition dialog. 30. On the Line Action tab, choose ACCEPT Record, and accept the remaining defaults. 31. Click Update. Test the Extract to ensure that your definitions are correct. 32. Click on the Browse Data Record button. 33. Choose OK to allow assignment of all Fields to the Extract File. 34. Examine the data to ensure that your Field definitions are correct. 35. Close the browser window. 36. Use the Parse Tagged Data functionality to define the Account Number, Purchase Order Number and PODate fields.
37. Double-click on a Purchase Order Number to access the Field Definition dialog. Note: The options at this tab determine how the Extract Schema Designer will process the data in this particular field from record to record. The use of these options makes a distinction between the data fields and the contents of those fields. When the Extract Schema Designer is collecting data fields, it collects all the fields that have been defined on lines of text whose line action is either COLLECT Fields or ACCEPT Record and assembles those fields into a data record. The options at this tab determine how data within a data field is handled.
38. On the Data Collection/Output tab, ensure that Propagate Field Contents has been selected. 39. Double-click on a PODate to access the Field Definition dialog. 40. On the Data Collection/Output tab, select Flush Field Contents. 41. Click Update. 42. Click on the Browse Data Record button. 43. Choose OK to allow assignment of all Fields to the Extract File. 44. Examine the data to see the effect of Propagate versus Flush. 45. Close the browser window. 46. Redefine the PODate field to propagate it as well. 47. Browse the data record again to ensure the data is being propagated. NOTE: In this case we do want the data to propagate, but you will need to decide which behavior you want for any situation. We can specify an order for the columns in your Extract File rows (if desired). 48. Choose Field>Export Field Layout from the menu bar. 49. To reposition a column, left-click and drag a column name up or down in the list, dropping it on top of another column name.
NOTE: When you drag upward, the column you are dragging will be placed before the column on which you drop it. When you drag downward, the column you are dragging will be placed after the column on which you drop it.
50. Put the six columns in the order they appear in the source file.
51. Click OK.
52. Exclude columns from the Extract File rows (if desired).
53. Select Record>Edit Accept Record from the menu bar.
54. Clear the check boxes for the columns that you do not wish to appear in the Extract File.
55. Click Update.
Save the Extract Schema Definition: If the Extract Schema Definition has already been saved before, click the Save Extract button to save it again under the same name. You may also choose File>Save Extract to perform the same function. If the Extract Schema Definition has not yet been saved, click the Save Extract button. In the Save dialog, supply the name PhonePurchases.cxl and verify the location where the Definition will be stored (changing it if necessary). You may also choose File>Save Extract to perform the same function. If the Extract Schema Definition has been saved before, but you have modified it and want to save it as a different Definition, then choose File>Save Extract As. In the Save dialog, supply a name for the Definition and verify or supply the save location.
Close the Extract Schema Designer.
56. Open Map Designer and establish a source connection based on the information below.
57. Open the Source Data Browser and note the results. Note that this source could now be used in the same way that any other source would be in a transformation.
58. Close Map Designer without saving.
Source (Extract Schema Designer's Connector)
location $(funData)Purchases_Phone.txt
Schema File
programfile C:\Cosmos_Work\Fundamentals\Solutions\Extract_Schema_Designer\InterfaceFundamentals.cxl
18. Double-click on the Order_File_Creator Line Style to change its name (if desired). 19. Double-click on the actual email address to open the Field Definition dialog. 20. Change the Field Name to OrderFileCreatorEmailAddress. 21. Click Update. 22. Use the Browse Data Record button to view the results. 23. Close the browser then Double-click on the Order_File_Creator Line Style name to open the Line Style Definition dialog. 24. On the Line Action tab, change the action to ACCEPT Record. 25. Click Update. 26. Choose Record>Edit Accept Record from the menu bar. 27. Choose Order_File_Creator for the Current Accept Record. 28. Select the OrderFileCreatorEmailAddress checkbox. 29. Choose ShipmentMethodCode for the Current Accept Record. 30. De-select the OrderFileCreatorEmailAddress checkbox. 31. Click Update. 32. Use the Browse Data Record button to view the results. 33. Save the Extract Schema Design as Purchases_Phone2.cxl and close the Extract Schema Designer.
NOTE: When an Extract Schema Design like this one is used as part of the Source specification for a transformation, the transformation Map tab will look as if the input file had been defined to have multiple record types. The email address will be in the last record read by the transformation, of course. If your requirements dictate that the email address be available as actual purchase records are processed, then you will have to use other techniques in a more complex transformation.
Keywords: Extract Schema Designer: Multiple Fields per Line Style (fixed) Description The next file that we will be parsing is Purchases_Mail.txt. We should take a look at it in a text viewer. Although it might be possible to use this report file as a direct input for a transformation, we would have to define it as a multiple-record-type file. Although there are fewer record types than with the phone purchases we dealt with earlier, there are still enough that, when combined with the extra processing logic involved, the job would become tedious. So, again, what we plan to do is use the Extract Schema Designer to create an extract specification that will transform the report file into a more familiar row/column format, and then use that formatted data as input to the transformation that adds these purchases to the database table. As before, we don't require multiple passes of the input file. We will just create the extract schema and apply it to the input on the Source tab of our eventual transformation. Exercise 1. From the Repository Explorer, select New Object>Extract Schema. 2. At the prompt, navigate to the file you will be working with, in this case, Purchases_Mail.txt. 3. In the Source Options dialog, on the Extract Design Choices tab, set the Tag Separator to Colon-Space. Also on this tab, ensure that the Trim Leading and Trailing Spaces checkbox is selected. 4. On the Display Choices tab, ensure that the Pad Lines checkbox is selected. 5. Choose OK to accept the selections. 6. Highlight the entire Account Number line in the data.
7. Right-click in the highlight and select Define Data Field>Parse Tagged Data.
8. Highlight the label Purchase Order Number. 9. Right-click in the highlight. 10. Select Define Line Style>New Line Style. 11. Change the Line Style Name to PONumber. 12. Choose Add. 13. Highlight the Purchase Order Number tag and the data following it. 14. Right-click in the highlight. 15. Select Define Data Field>Parse Tagged Data. 16. Define the PO_Date Field using the same technique. 17. Define the Category Line Style and the three Fields on it using the same technique. 18. Define the Unit Cost Line Style and the three Fields on it using the same technique.
19. Define the Line Style that determines the end of a row of data for the Extract File. 20. Locate the Line Style that contains the Field that will be the last column in each row in the eventual extract file (in this case, Unit_Cost). 21. Double-click on the Line Style name to bring up the Line Style Definition dialog. 22. On the Line Action tab, choose ACCEPT Record, and accept the remaining defaults. 23. Choose Update. 24. Click on the Browse Data Record button. 25. Choose OK to allow assignment of all Fields to the Extract File. 26. Examine the data to ensure that your Field definitions are correct. 27. Close the browser window. 28. Ensure that the Fields are in the order they appear in the input data. 29. Save the Extract Schema Design as Purchases_Mail.cxl. 30. Close the Extract Schema Designer. 31. Remember that this schema can be used as part of a source connection in Map Designer.
Keywords: Extract Schema Designer: Multiple Fields per Line Style (variable)
Description The next file that we will be parsing is Purchases_Fax.txt. We can examine it in a text viewer. Notice that this file has fields with variable lengths, so that any given field may not occupy the same column position as it did in the previous record. What we plan to do is use the Extract Schema Designer to create an extract specification that will transform the report file into a more familiar row/column format, and then use that formatted data as input to the transformation that adds these purchases to the database table. As before, we don't require multiple passes of the input file. We will just create the extract schema and apply it to the input on the Source tab of our eventual transformation. Exercise 1. From the Repository Explorer, select New Object>Extract Schema. 2. At the prompt, navigate to the file you will be working with, in this case, Purchases_Fax.txt. 3. In the Source Options dialog, choose OK to accept the defaults. 4. Highlight the literal Order Header and right-click in the highlight. 5. Select Define Line Style>Auto New Line Style>Action - Collect fields. 6. Highlight an Account Number and Right-click in the highlight. 7. Select Define Data Field>New Data Field. 8. Change the Field Name to AccountNumber. 9. Choose Floating Tag. 10. Enter the tag Account Number(. 11. Use first tag starting at position 0. 12. Choose Floating Tag. 13. Enter the tag ) (a single closing parenthesis). 14. Use first tag starting at position 0. 15. Choose Add. 16. Highlight a PO Number and right-click in the highlight. 17. Select Define Data Field>New Data Field. 18. Change the Field Name to PONumber. 19. For the Start Rule, select the first floating tag of PO Number( starting at position 0. 20. For the End Rule, select the first floating tag of ) starting at position 0. 21. Choose Add.
NOTE: When working with Floating Tags, the starting position for the End Rule is relative to the beginning of the Field being defined- not the beginning of the record. So even though the closing parenthesis for the PONumber is the second one from the beginning of the file, it is only the first one from the beginning of the PONumber. 22. Highlight a PO Date, right-click and select Define Data Field>New Data Field. 23. Change the Field Name to PODate. 24. For the Start Rule, select the first floating tag of PO Date: starting at position 0. Please note that there is a space after the colon. 25. For the End Rule, choose End of Line. 26. Choose Add. 27. Highlight the literal Item and right-click in the highlight. 28. Select Define Line Style>Auto New Line Style>Action - Collect fields. 29. Highlight a Category and right-click in the highlight. 30. Select Define Data Field>New Data Field. 31. Change the Field Name to Category. 32. Choose Add. 33. Highlight a Product Number and right-click in the highlight. 34. Select Define Data Field>New Data Field. 35. Change the Field Name to ProductNumber. 36. For the Start Rule, select the first floating tag of / starting at position 0. 37. For the End Rule, select the first floating tag of (a single space) starting at position 0. 38. Choose Add. 39. Highlight a Quantity, right-click and select Define Data Field>New Data Field. 40. Change the Field Name to Quantity. 41. For the Start Rule, select the third floating tag of (a single space) starting at position 0. 42. For the End Rule, select the first floating tag of / starting at position 0. 43. Choose Add. 44. Highlight a Unit Cost, right-click and select Define Data Field>New Data Field. 45. Change the Field Name to UnitCost. 46. For the Start Rule, select the second floating tag of / starting at position 0. 47. For the End Rule, select the first floating tag of / starting at position 0. 48. Choose Add. 49. 
Highlight a Shipment Method Code, right-click and select Define Data Field>New Data Field. 50. Change the Field Name to ShipmentMethodCode.
51. For the Start Rule, select the third floating tag of / starting at position 0. 52. For the End Rule, choose End of Line. 53. Choose Add. 54. Locate the Line Style that contains the Field that will be the last column in each row in the eventual extract file (in this case, Item). 55. Double-click on the Line Style name to bring up the Line Style Definition dialog. 56. On the Line Action tab, choose ACCEPT Record, and accept the remaining defaults. 57. Click Update. 58. Click on the Browse Data Record button. 59. Choose OK to allow assignment of all Fields to the Extract File. 60. Examine the data to ensure that your Field definitions are correct. 61. Close the browser window. 62. Ensure that the Fields are in the order they appear in the input data. 63. Save the Extract Schema Design as Purchases_Fax.cxl. 64. Close the Extract Schema Designer. 65. Remember that this schema can be used as part of a source connection in Map Designer.
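Floating tags are start/end delimiters searched for within the line, rather than fixed column positions. The same extraction can be sketched with a regular expression (the tag strings come from the steps above; the sample line and pattern layout are assumptions):

```python
import re

# "Account Number(...)" and "PO Number(...)" are floating-tag pairs:
# the field is whatever sits between the opening tag and the next ")".
HEADER = re.compile(r"Account Number\((?P<acct>[^)]*)\).*?"
                    r"PO Number\((?P<po>[^)]*)\)")

def parse_order_header(line):
    """Extract the account number and PO number from an Order Header line,
    wherever those tagged values happen to start."""
    m = HEADER.search(line)
    return (m.group("acct"), m.group("po")) if m else None

print(parse_order_header(
    "Order Header Account Number(01-000667) PO Number(PO1234) PO Date: 3/4/2006"))
```

Like the Extract Schema Designer's End Rule, the `[^)]*` pattern stops at the first closing parenthesis after the field starts, not the first one in the whole line.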
EasyLoader
EasyLoader is a simple flat-file mapper that creates intermediate data load files. It supports single-record source data and creates single-record, flat data load files. All targets are predefined by the system. There is a Programmer's Guide describing how to create a predefined target, and Section 3 below will walk you through creating a simple target. All events and actions required at run time are automatically created and hidden from view at design time.
Related Documents:
1. Easy_Loader.pdf: a document describing how to use EasyLoader.
2. Easy_Loader_Dev_Guide.pdf: a programmer's document describing how to create targets for use with EasyLoader.
In a default installation, these files are located here: C:\Program Files\Pervasive\Cosmos\Common800\Help
Installing EasyLoader
EasyLoader is installed with version 8.12 or newer. You will need to install this version of the Integration products.
- predefined Targets and Target schemas
- automatic addition of the events and actions needed to run
- simplified mapping view
- auto-matching by name (case-insensitive; Map Designer is case-sensitive)
- single-field mapping wizard launched from the Target's fields grid (use the wizard to map all fields, or just one field at a time)
- predefined Target record validation rules
- specific validation error logging for quick data cleansing
Easy Loader requires a predefined Structured Schema, and it creates a tf.xml and a map.xml file that can be used in the same ways as Transformations built in Map Designer. Courseware and exercises on Easy Loader follow in this document. EasyLoaderTraining.ppt
Selecting the target: 3. Select School Messenger as the target and accept Student as the target schema (Press Next) 4. Accept the default target file location and name (Press Next) Connecting to the source: 5. Accept/Select Ascii (Delimited) as the source (Press Next) 6. Enter Tutor1.asc as the source file name and set the Header=true property. Don't forget to press Apply after setting the header property. (Press OK) 7. Make sure the source data looks right (Press Next) Mapping Loop: 8. Target field 1 (School Name) map to constant obtained from default field expression 9. Accept the wizard mapping option (Press Next) 10. Accept Cat Hollow Elemetary as the constant map to School Name (Press Next) 11. You can navigate through the records if you want, but the map is to a constant, so the mapping results for all records will be the same. Accept the Done option (Press Next) 12. Target field 2 (School Number) map to source field with transformation 13. Accept the wizard mapping option (Press Next) 14. Erase the 14006 value from the Constant textbox. Accept the Show All source field list option and drop down the source field list. Notice that the second column shows some fields that were already mapped. This is because EasyLoader does a match by name for you prior to entering the mapping loop. Select Account No as the source field map to School Number. (Press Next) 15. Navigate through some records to view the mapping results. Select the Field Transformation option (Press Next) 16. Select the Left$ function (Press Next) 17. You will be taken into the function builder with the source field filled in to one of the function parameters. Enter 2 into the second (Length) parameter. (Press OK) 18. Navigate through some records to view the mapping results, which should be the left 2 digits of the Account No source field. Accept the Done option (Press Next) 19. Target field 3 (Student lastname) map to source field 20. Accept the wizard mapping option (Press Next) 21. 
Select the Show Most Likely source field list option and drop down the source field list. EasyLoader has detected a couple source field names that might match the target. If you do not see the one you need in this list, you can select the Show All source field list option. Select Last Name (Press Next) 22. Navigate through some records to view the mapping results. Accept the Done option (Press Next) 23. Target fields 4 and 5 (Student firstname, Student Address1)
24. Repeat the above steps for target fields 4 & 5 selecting source fields First Name, then Address. 25. Target field 6 (Address2) 26. Accept the wizard mapping option (Press Next) 27. Select the checkbox for No Mapping (Press Next) 28. Notice the Null mapping results. Accept the Done option (Press Next) 29. Target fields 7-9 (City, State and Zip) 30. Move through the mapping dialogs for these fields by continuing to press Next. These fields have already been mapped for you. 31. Target fields 10 & 11 (Home Tel and Mobile Tel) 32. Move through the dialogs for these fields selecting the No Map checkbox. 33. Target field 12 (Age) map to source field with transformation 34. Accept the wizard mapping option (Press Next) 35. Accept the Show All source field list option and drop down the source field list. Notice that the second column shows all previously mapped source fields. This does not mean you can not map the source field to another target field. Select Account No as the source field map to Age. (Press Next) 36. Navigate through some records to view the mapping results. Select the Field Transformation option (Press Next) 37. Select the Right$ function (Press Next) 38. You will be taken into the function builder with the source field filled into to one of the function parameters. Enter 2 into the second (Length) parameter. (Press OK) 39. Navigate through some records to view the mapping results which should be the right 2 digits of the Account No source field. Accept the Done option (Press Next) 40. Target fields 13 & 14 (Grade Level and Gender) map to constant 41. Accept the wizard mapping option (Press Next) 42. Enter the value 4 into the constant textbox (Press Next) 43. You can navigate through the records if you want but the map is to a constant so the mapping results for all records will be the same. Accept the Done option (Press Next) 44. Repeat for Gender but enter 2 (for Male) 45. 
Target field 15 (Home Language) map to constant from default field expression 46. Accept all the defaults here (Press Next) 47. Press OK to exit the wizard Saving and Running the Map 48. Press the save toolbar button and save the map. Notice that it is the same map that Map Designer saves. 49. Press the run toolbar button to run the map. Notice that not all the records converted.
50. Press the logfile toolbar button to view the log. EasyLoader predefined targets come with very detailed record validation rules. Page through the log files to view why some of the records did not convert. Summary This exercise allowed you to create a map using the EasyLoader new map wizard. You were then able to save, run the map, and view any data validation errors.
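Outside the product, the Left$ and Right$ transformations used above (steps 16-17 and 37-38) are plain substring operations. The sketch below is an illustrative Python analogue, not EasyLoader's implementation; the sample Account No value is invented:

```python
def left(value: str, length: int) -> str:
    """Analogue of RIFL Left$: the first `length` characters."""
    return value[:length]

def right(value: str, length: int) -> str:
    """Analogue of RIFL Right$: the last `length` characters."""
    return value[-length:] if length > 0 else ""

# A hypothetical Account No such as "1406": School Number takes the
# left 2 digits, Age takes the right 2 digits, as in the exercise.
account_no = "1406"
school_number = left(account_no, 2)   # "14"
age = right(account_no, 2)            # "06"
```

The same source field feeds two target fields with different substring transformations, which is exactly what steps 14 and 35 rely on.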
Selecting the target:
3. Select School Messenger as the target and accept Student as the target schema. (Press Next)
4. Accept the default target file location and name. (Press Next)
5. At this point you will exit the target selection process and be back on the main screen, where you will see the target schema.
Connecting to the source:
6. Click on the Source Connection toolbar button.
7. In the source connection dialog, click on the drop-down arrow beside the connection textbox and choose the factory connection ASCII (Delimited).
8. Enter Tutor1.asc as the source file name and set the Header=true property. Don't forget to press Apply after setting the Header property. (Press OK)
9. At this point you will exit the source connection dialog and be back on the main screen, where you will see that a couple of target field expressions have been added. These are the ones found by EasyLoader while doing a case-insensitive match by name.
10. If the test expression panel is not showing in the lower right-hand corner of the main window, select View > Field Expression Results so that you can navigate through the source records, viewing your mapping results in the target field Results column at any time.
Mapping the rest of the target fields:
11. Target field 3 (Student lastname): map to a source field.
12. Drop down the target field expression column for this field and select Fields(Last Name).
13. Target fields 4 and 5 (Student firstname, Student Address1):
14. Repeat the above step for target fields 4 and 5, selecting source fields First Name, then Address.
15. Target field 6 (Address2):
16. Leave blank.
17. Target fields 7-9 (City, State and Zip):
18. These are already mapped for you.
19. Target fields 10 and 11 (Home Tel and Mobile Tel):
20. Leave blank.
21. Target field 12 (Age): map to a source field with a transformation.
22. Click on the wizard wand button in the column to the right of this target field expression. Clicking once sets the focus on this button.
23. Click on the button again to launch the single field mapping wizard. This is the same mapping wizard you used in section 1 above.
24. Click on the Show All source field list option and drop down the source field list. Notice that the second column shows all previously mapped source fields. Select Account No as the source field to map to Age. (Press Next)
25. Navigate through some records to view the mapping results. Select the Field Transformation option. (Press Next)
26. Select the Right$ function. (Press Next)
27. You will be taken into the function builder with the source field filled in to one of the function parameters. Enter 2 into the second (Length) parameter. (Press OK)
28. Navigate through some records to view the mapping results, which should be the right 2 digits of the Account No source field. Accept the Done option. (Press Next)
29. This will put you back on the main form with the newly created field expression in the grid. The row has a pencil image in the row selector column, meaning that the changes have not been committed yet. Click on the row above this one to commit the changes.
30. Target field 13 (Grade Level): map to a constant.
31. Click inside the target field expression cell for this field.
32. Type in 4, then click on the row above to commit the change.
33. Target field 14 (Gender): map to a complex expression.
34. Drop down the target field expression column for this field and select <Build Expression>.
35. In the expression editor, click on Flow Control in the bottom left tree. Then double-click on IfThenElse in the bottom right grid. You will see the following in the expression text box:
    If condition then
        statement block1
    Else
        statement block2
    End if
36. Highlight the word condition and then select R1 under Source in the bottom left tree. Then double-click on Payment. Then add < 300.
37. Replace statement block1 with return "1".
38. Replace statement block2 with return "2".
39. You will see the following in the expression text box:
    If Records("R1").Fields("Payment") < 300 then
        return "1"
    Else
        return "2"
    End if
40. Press OK.
41. Click on the row above this field expression to commit your changes. NOTE: the above transformation makes no sense for Gender, but it shows how to create a complex expression when you need to.
42. Navigate through the source records to view your mapping results.
Saving and running the map:
43. Press the Save toolbar button and save the map. Notice that it is the same map that Map Designer saves.
44. Press the Run toolbar button to run the map. Notice that not all the records converted.
45. Press the Logfile toolbar button to view the log. EasyLoader predefined targets come with very detailed record validation rules. Page through the log file to view why some of the records did not convert.
Summary
This exercise allowed you to create a map WITHOUT using the EasyLoader new map wizard. You were then able to save and run the map and view any data validation errors.
13. Click on the Record Types target tree node. In the grid to the right you should see one record type named R1. Rename this record to the name of the schema you plan to create. For example, if your schema is a Customer record, rename the record Customer. After changing the R1 name, click on the next grid cell over to commit your changes.
14. Click on the newly named RecordName Fields target tree node (i.e., Customer Fields). The grid to the right is for entering the target schema fields. NOTE: we are going to manually enter the schema fields. If you have a schema that you want to import, you can stay on the Record Types node and follow Map Designer's instructions for using the Schema Origin column in the record grid to import a schema. If you do this, remember that the target MUST be a single-record, flat target. If you have a target schema that is hierarchical, you'll need to flatten it out into one record type.
15. When creating a schema for use with EasyLoader, it is important to do the following:
    Use easy-to-understand field names.
    Add very specific field descriptions, including any limitations (e.g., "Max value 1000" or "Possible values are M and F").
    Set the Field Required and Default Expr field properties if applicable.
    If the datatype is Boolean, be sure to enter the Picture field property.
16. Add the following fields to your Customer record (Name, Desc, Dtype, Size, Required, Default Expr):
    Name: Customer name (First and Last separated by a space, or it could be a business name.), Text, 100, Yes
    Country: Customer country of residence. Possible values are USA, Canada, Mexico., Text, 25, Yes, USA
    IsActive: True indicates customer is an active account. Possible values are 0 (for false) or 1 (for true)., Numeric, 1, No, 1
17. At this point we are going to save the schema. Click on the Record Types target tree node again. Then right-click and select Save Schema As. When the dialog comes up, name the schema Customer. Press OK. This will create Customer.ss.xml.
18. Notice when you are done that the Customer Fields node has a lock on it. Click on this tree node, right-click, then select Unlock Schema. We want to edit it some more.
Create Record Validation Rules for your schema
19. Click on the Customer Rules target tree node to expand it. Then click on the Customer Validation target tree node. You will see a grid to the right allowing for ONE row of RIFL validation code. NOTE: if your validation rules approach the 32767-character limit, you will need to create functions out of your rules, store the functions in a RIFL code module, and call the functions from within this Record Validation Rule expression. See the Programmer's manual on how to do this.
20. Click on the button inside the grid to take you out to the expression editor for making the validation rules. The following was taken directly from the Programmer's Guide describing the format your validation rules should take.
The record validation rules should be in the following format:
Beginning:
A comment that specifically reads: TargetName_Schema Validation Rules.
Dim any local variables you intend to use in your RIFL validation expression, including a Boolean variable to return at the end (indicating whether the record is valid) and a record identifier variable.
Code that initializes the Boolean variable to return at the end and initializes a record identifier to use when logging validation errors. The record identifier variable will only be set once; think of it as the record's key.
Middle:
1 to N validation rules.
End:
Return the validation Boolean variable.
Aside from this format, your validation logic has access to a global reccnt variable that holds the number of the current record being read and transformed. Your validation logic should check for validation errors and, when one is found, use the reccnt and fldid (record identifier) variables to log a very descriptive validation error. An example for our Customer record might look like this:
21. Enter the following into the expression editor:

    'MyTarget_Customer Validation Rules
    Dim isvalidrecord   ' the boolean validation variable to return in the end
    Dim temp            ' a temporary variable
    Dim fldvalue        ' a variable to hold a target field value
    Dim fldid           ' a variable to hold the target record identification value

    isvalidrecord = true                              ' initialize the variable
    fldid = Targets(0).Records(0).Fields("Customer")  ' set record key for use in LogMessage

    'Check Customer is not blank or null
    fldvalue = Targets(0).Records(0).Fields("Customer")
    If (fldvalue == "" Or IsNull(fldvalue)) then
        Logmessage("WARN", "MyTarget_Customer VALIDATION WARNING--->Record: " & reccnt & ", Customer: " & fldid & " has invalid Customer value (" & fldvalue & "). It should not be blank or null.")
        isvalidrecord = false
    End if

    'Check Country is USA, Canada or Mexico
    fldvalue = Targets(0).Records(0).Fields("Country")
    Select Case fldvalue
        Case "USA", "Canada", "Mexico"
            temp = 1  ' needed because a case must have a statement; ignored
        Case Else
            Logmessage("WARN", "MyTarget_Customer VALIDATION WARNING--->Record: " & reccnt & ", Customer: " & fldid & " has invalid Country value (" & fldvalue & "). It should be USA, Canada or Mexico.")
            isvalidrecord = false
    End Select

    'Check IsActive is 0 or 1 (for false or true)
    fldvalue = Targets(0).Records(0).Fields("IsActive")
    If (fldvalue <> "0" And fldvalue <> "1") then
        Logmessage("WARN", "MyTarget_Customer VALIDATION WARNING--->Record: " & reccnt & ", Customer: " & fldid & " has invalid IsActive value (" & fldvalue & "). It should be 0 or 1 for false or true.")
        isvalidrecord = false
    End if

    'Done, return boolean value
    return isvalidrecord

22. Click the Validate toolbar button in the expression editor to make sure the expression you have typed is valid. Fix any errors.
23. Click OK.
24. Click back on the Record Types target tree node, right-click, and select Save Schema As to resave the schema. Overwrite the Customer.ss.xml saved previously with no record validation rules. NOTE: at this point you should test what you have written by connecting to a source, entering a target Excel file name, mapping some fields, saving the map, and running it. Open the log file to see if you got the validation error results you expect. Correct any problems and resave the schema. Don't forget to Unlock Schema every time after a save to allow for continued editing.
25. Once you have fully tested your schema, move the Customer.ss.xml file to the Targets\MyTarget subdirectory created above.
Testing your target within EasyLoader
26. Launch EasyLoader.
27. Click on the New Map toolbar button to launch the wizard.
28. Choose your target and schema (MyTarget, Customer).
29. Continue through the wizard as in section 1 above, connecting to a source and mapping your 3 fields.
30. Save and run the map. Compare the results with the test you ran above in Map Designer when testing the schema. If problems arise, go back into Map Designer to edit the schema.
NOTE: schema creation can also be done inside Structured Schema Designer. However, if your validation rules approach the maximum character limit, where you have to start creating a RIFL code module to house your validation rule functions, keep in mind that code modules can only be created within Map Designer.
Summary
This training module taught you how to create a target and a target schema for use within EasyLoader. You can create as many target schemas (ss.xml files) as you like for a given target, and you can support as many different targets as you want.
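The record-validation pattern shown above (initialize a flag, run per-field checks that each log a descriptive warning and clear the flag, then return the flag) is not specific to RIFL. Here is a minimal Python sketch of the same shape for the Customer schema; the function name and the dict-based record are invented for illustration, not part of the product:

```python
import logging

log = logging.getLogger("MyTarget_Customer")

def validate_customer(record: dict, reccnt: int) -> bool:
    """Return True if the record passes all rules; log one warning per failure."""
    is_valid = True
    fldid = record.get("Name", "")  # record identifier, set once (the record's "key")

    # Rule: Name must not be blank or missing
    if not record.get("Name"):
        log.warning("Record %s, Customer %s: Name should not be blank", reccnt, fldid)
        is_valid = False

    # Rule: Country must be USA, Canada or Mexico
    if record.get("Country") not in ("USA", "Canada", "Mexico"):
        log.warning("Record %s, Customer %s: invalid Country %r", reccnt, fldid, record.get("Country"))
        is_valid = False

    # Rule: IsActive must be 0 or 1 (for false or true)
    if str(record.get("IsActive", "")) not in ("0", "1"):
        log.warning("Record %s, Customer %s: invalid IsActive %r", reccnt, fldid, record.get("IsActive"))
        is_valid = False

    return is_valid
```

A failed rule does not short-circuit: every rule runs, so one pass over a bad record reports all of its problems, which mirrors how the RIFL example accumulates warnings before returning the flag.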
Process Designer is a graphical data transformation management tool you can use to arrange your complete transformation project. With Process Designer, you can organize Map Designer Transformations with logical choices, SQL queries, global variables, Microsoft's DTS packages, and any other applications necessary to complete your data transformation. Once you have organized these Steps in the order of execution, you can run the entire workflow sequence as one unit. IntegrationArchitect_ProcessDesigner.ppt
Creating a Process
Objectives At the end of this lesson you should be able to create a simple Process Design. Keywords: Process Designer, Transformation Map, and Component Description To create a new Process, first consider what is necessary to accomplish the complete transformation of your data. Form a general idea of the logical steps to reach your goal, including which applications you need, and what decisions must be made during the Process. Once you have a good idea of what will be involved, open Process Designer (via the Start Menu or Repository Explorer) and begin. Remember that Process Steps can be re-arranged, deleted, added, or edited as you build your design. Exercise 1. Open Process Designer. 2. Add a Transformation step to the Process Design. 3. Right Click on the Transformation Map and choose Properties. 4. Click Browse and choose OutputModes_Clear_Append.map.xml from a previous exercise or from the solutions folder. Note: A Process Designer SQL Session is a particular method of connecting to the given SQL application's API. We can use the same session in multiple steps or create new sessions wherever needed. We must have at least one session if any connection to a relational database is made during the process.
5. Let's accept the default here by clicking OK.
6. Name this step Load_Accounts.
7. Add another Transformation step to the Process Design.
8. Right-click on the Transformation Map and choose Properties.
9. Click New to open the Map Designer.
10. Create a new map that loads Category.txt into the tblCategories table in the TrainingDB database. Use the report below for specifications. (ASCII (Delimited))
location $(funData)Category.txt
Source R1 Events
AfterEveryRecord ClearMapPut Record
Target R1
Map Expressions
R1.Code R1.Category R1.ProductManager
11. Accept the default for the Transformation Step dialog. 12. Choose Use an existing session for the target in the Sessions Dialog. 13. Name step Load_Categories. 14. Create a new map that loads ShippingMethod.txt into the tblShippingMethod table in the TrainingDB Database. Use the report below for specifications. (ASCII (Delimited))
location $(funData)ShippingMethod.txt
Source R1 Events
AfterEveryRecord ClearMapPut Record
Target R1
Map Expressions
R1.Shipping Method Code R1.Shipping Method Description
15. Accept the default for the Transformation Step dialog. 16. Choose Use an existing session for the target in the Sessions Dialog. 17. Name step Load_ShippingMethod. 18. Establish the Step Sequence. 19. Validate the Process Design. 20. Save the Process as Load_Tables. 21. Run the Process Design. 22. Examine the Target Tables.
There follows some information taken from reports generated by Repository Manager from the Load_Tables process in the Solutions folder: Start (Start) LoadAccounts (Transformation)
../MapDesigner_TransformationFundamentals/OutputModes_Clear_Append.map.xml ODBC3x-1
Unconditional
LoadCategories (Transformation)
processname Predecessors LoadAccounts Unconditional LoadtblCategories.map.xml
LoadShippingMethod (Transformation)
processname Predecessors LoadCategories Unconditional LoadtblShippingMethod.map.xml
Stop (Stop)
Predecessors LoadShippingMethod Unconditional
Exercise
1. Open Process Designer.
2. Add a Transformation step to the Process Design.
3. Right-click on the Transformation Map and choose Properties.
4. Click Browse and choose Reject_Connect_Info.map.xml from a previous exercise or from the solutions folder.
5. Accept the default for the Transformation Step and the Sessions dialog.
6. Name the step LoadAccounts_CheckDates.
7. Add a Decision step to the Process Design.
8. Right-click on the Decision icon and select Properties.
9. Name the step Eval_RejectRecordCount.
10. Using the Step Result Wizard, create and add the following code:
    Project("LoadAccounts_CheckDates").RejectRecordCount > 0
11. Click OK to close.
12. Add a Scripting step to the Process Design.
13. Right-click on the Scripting icon and select Properties.
14. Use NotificationBadDates as the Step Name.
15. Use the Build button to build an expression that will display "There are STILL invalid dates!!" in a message box with a stop icon, an OK button, and the title Invalid Date Warning:
    MsgBox("There are STILL invalid dates!!", 16, "Invalid Date Warning")
16. Click OK to close.
17. Link the Start step to the Transformation step.
18. Link the Transformation step to the Decision step.
19. Link the Decision step to the Stop step (this path should be followed if the Decision evaluates to False).
20. Link the Decision step to the Scripting step (this path should be followed if the Decision evaluates to True).
21. Link the Scripting step to the Stop step.
22. Validate the Process Design.
23. Save your Process Design as ConditionalBranching_StepResultWizard.ip.xml.
24. Run the Process Design.
There follows some information taken from reports generated by Repository Manager from the ConditionalBranching_StepResultWizard process in the Solutions folder:
Sessions
ODBC3x-1 (ODBC 3.x)
Database TrainingDB
Stop (Stop)
Predecessors EvalRejectRecordCount NotificationBadDates False Unconditional
EvalRejectRecordCount (Decision)
Project("LoadAccounts_CheckDates").RejectRecordCount > 0
Predecessors LoadAccounts_CheckDates Unconditional
NotificationBadDates (Scripting)
MsgBox("There are STILL invalid dates!!", 16, "Invalid Date Warning")
Predecessors EvalRejectRecordCount True
Stop (Stop)
Predecessors EvalRejectRecordCount False
NotificationBadDates
Unconditional
Variables
myFiles (Variant(0)): This array contains a list of file names passed from the FileList function.
myFileCounter (Variant, initial value -1): This variable is used as the index of the myFiles array.
myPath (Variant): This variable is used to store the path of the "inbox" directory. Consider using a lookup or user input to change this programmatically.
myCurrentFile (Variant): This variable is used in the ChangeSource action within the Map step. The map initially points to a "NUL:" source file; this variable will change it to the next/current file name in the array.
2. Put a Transformation step onto the Canvas. Browse to the OutputModes_Clear_Append.map.xml from a previous exercise or from the solutions folder.
3. Accept the default in the Sessions dialog.
4. Name the step LoadAccountsTable.
5. Put in a scripting step as described below:
BuildFileList (Scripting)
' Set directory for incoming files. ' Consider using lookup or user input for this value. myPath = MacroExpand("$(funData)") & "InBox\" ' Gather list of file names. Use wildcards if needed. FileList(myPath & "AddrChg*.*", myFiles()) ' Set array index counter (Zero based). myFileCounter = UBound(myFiles)
Predecessors LoadAccountsTable Unconditional
' Set var for use in Map. ' This var will be used in ChangeSource action. myCurrentFile = myPath & myFiles(myFileCounter) ' Verification... Dim A A = Ubound(myFiles) - myFileCounter MsgBox("File name = " & myFiles(myFileCounter) & " "& "File " & A + 1 & " of " & Ubound(myFiles)+1)
' Use Return statement to exit this module
Return

' Error handler
myError:
' Get the error info and check variable values
MsgBox("Err.Number = " & Err.Number & " Err.Description = " & Err.Description & " myPath=" & myPath & " myFileCounter=" & myFileCounter & " myFiles(0)=" & myFiles(0))
' This might only be terminating the step... LogMessage("ERROR","err.number = " & Err.number) Terminate()
Predecessors GotFiles? True
9. Put in a Transformation step and click New. Then build a map to the specifications below: Source (ASCII (Delimited))
location Nul:
SourceOptions
header
True
Update AccountNumber
Update Mode Options Update ALL matching records and ignore non-matching records.
Map Expressions
R1.AccountNumber R1.Street
Variables
Name            Type      Public   Value
myPath          Variant   yes
myCurrentFile   Variant   yes
myFileCounter   Variant   yes      -1
MapEvents
BeforeTransformation
ChangeSource
source name:        Source
connection string:  "+File=" & myCurrentFile
Source R1 Events
AfterEveryRecord ClearMapPut Record
Target R1
10. Save the map as UpdateAddresses and close Map Designer. 11. Use the same session as was created in the first Transformation Step. 12. Name this step UpdateAdds. 13. Put in a Decision step as described below: SuccessCheck (Decision)
Project("UpdateAdds").ReturnCode == 0
Predecessors UpdateAdds Unconditional
15. Put in a scripting step as described below:
Notification_UpdateFailure (Scripting)
MsgBox("Update Address Map Failed")
Predecessors SuccessCheck False
16. Connect the steps as in the screen shot above the exercise instructions. 17. Validate the process. 18. Save it as FileListLoop. 19. Run it and note the results.
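The BuildFileList / ChangeSource pattern in this exercise amounts to gathering a wildcard file list and driving the same map once per file, counting the index down until the loop exits. Here is a rough Python analogue of that control flow; the directory and the AddrChg pattern come from the exercise, while `process_file` is an invented stand-in for running the map against the current file:

```python
import glob
import os

def build_file_list(path: str, pattern: str = "AddrChg*.*") -> list:
    """Gather the list of matching file names, as the FileList call does."""
    return sorted(os.path.basename(p) for p in glob.glob(os.path.join(path, pattern)))

def run_loop(path: str, process_file) -> int:
    """Process each file in the list, decrementing a zero-based index
    (like myFileCounter) until it goes negative; return the count processed."""
    files = build_file_list(path)
    counter = len(files) - 1               # UBound(myFiles) equivalent
    while counter >= 0:
        current = os.path.join(path, files[counter])  # myCurrentFile equivalent
        process_file(current)              # stands in for ChangeSource + the map run
        counter -= 1                       # decision step loops until index < 0
    return len(files)
```

The decision step in the process plays the role of the `while counter >= 0` test: it routes back to the map step while files remain and falls through to Stop when the counter is exhausted.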
Integration Engine
Integration Engine is an embedded data transformation engine used to deploy runtime data replication, migration, and transformation jobs on Windows or UNIX-based platforms quickly and easily, without costly custom programming. It fills the need for a low-cost, universal data transformation engine. Integration Engine is a 32-bit data transformation engine written in C++, containing the core data driver modules that are the foundation of the transformation architecture. Because it is a pure execution engine with no user interface components, it can perform automatic, runtime data transformations, making it ideal for environments where regular data transformations need to be scheduled and launched on Windows or UNIX-based systems.
Execute A Transformation
Objectives
This lesson shows how to execute a Transformation Map via the command line interface.
Keywords: Executing a Map
Description
At the command prompt, type:
djengine MapName.tf.xml
Tip: You can drag and drop a transformation file name from a Windows Explorer window onto the command line.
Add -verbose at the end of the command to get statistics printed to the console during runtime. At the command prompt, type:
djengine C:\Cosmos_Work\Fundamentals\Solutions\IntegrationEngine_CommandLine\EngineTest.tf.xml -verbose
The -Define_Macro option allows us to define individual Macros on the command line. At the command prompt, type:
djengine -Define_Macro Data=C:\Cosmos_Work\Fundamentals\Data\ C:\Cosmos_Work\Fundamentals\Solutions\IntegrationEngine_CommandLine\EngineTestwithMacro.tf.xml -verbose
Note that only 54 records were written. Note also that we did not need to define the Macro or the path to the Macro File. The Macro in the map was only used in the source connection and we defined a new source with a complete path. So the Macro was no longer relevant.
Executing a Process
Keywords: Using the Process Design Option
Command syntax is: djengine -process_execute filename (include the path)
At the command prompt, type:
djengine -Macro_File C:\Cosmos_Work\Workspace1\macrodef.xml C:\Cosmos_Work\Fundamentals\Solutions\ProcessDesigner_DataIntegrator\CreatingAProcess.ip.xml -verbose
Note that we had to use the -Macro_File option because some of the Maps in the process had a Macro as part of the source connection.
Note that without the -verbose option, the only command line indication that the Map ran correctly is a single line: Return Code : 0
Now let's change the value of the variable. For a string with a single word, type at the command prompt:
djengine -se myVar=\"NewValue\" C:\Cosmos_Work\Fundamentals\Solutions\IntegrationEngine_CommandLine\EngineTestwithVar.tf.xml
For a string with multiple words, type at the command prompt:
djengine -se myVar=\"New Value\" C:\Cosmos_Work\Fundamentals\Solutions\IntegrationEngine_CommandLine\EngineTestwithVar.tf.xml
Additional notes: Aside from the normal command line quoting/escaping rules for the given operating system, whatever is to the right of the equals sign is used verbatim in an expression to set the variable. On Windows, the only command line quote character is the double quote, and it is escaped using a backslash. By using -se gblsStartDate='07-09-1976' you are causing the expression gblsStartDate = '07-09-1976' to be executed, which of course does nothing, since the single quote indicates the start of a comment. By using -se gblsStartDate=07-09-1976 you are causing the expression gblsStartDate = (07 - 09) - 1976 to be executed. If you use -se gblsStartDate="07-09-1976" you will get the same results as above (as if the quotes weren't present). However, if you use -se gblsStartDate=\"07-09-1976\" the expression gblsStartDate = "07-09-1976" will be executed, which is what you want. Note that this also means you can do something like -se gblsStartDate=now() and have gblsStartDate = now() executed.
Scheduling Executions
Keywords: Using NT Task Scheduler
To set up a Windows task, go to Programs > Accessories > System Tools > Scheduled Tasks > Add Scheduled Task. The Wizard will walk you through the steps.
Mapping Techniques
This section explores the capabilities of Transformation Map Designer in more detail.
The target output will look like this: E1 A1 A2 A3 E2 A1 A2. To duplicate this in your real-world situations, the trick is to know where to put the ClearMapPut event handlers that write target records. You make that decision by knowing what is in the source buffer or buffers. (The source buffer is the internal object that stores the values that have just been read in from a source record; there is one buffer for each source record type.) You also need to understand what you need in the target: you write a target record when the source buffers hold all the data that record needs. One more thing: as a general rule, you need one action that writes a target record (ClearMapPut is the most common) for every target record type.
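As an illustrative sketch only (not the product's API), the OnDataChange logic in this exercise can be expressed as: watch a monitored field while reading sorted source rows, write an Employee record whenever that field's value changes, and write an Auto record for every row. The field names below mirror the map specification:

```python
def one_to_many(rows):
    """Each sorted source row carries both employee and auto fields.
    Emit one 'E' record per change of Initials (OnDataChange) and
    one 'A' record per row (AfterEveryRecord)."""
    out = []
    last_initials = None
    for r in rows:
        if r["Initials"] != last_initials:            # OnDataChange on Initials
            out.append(("E", r["Initials"], r["City"]))
            last_initials = r["Initials"]
        out.append(("A", r["Initials"], r["Make"]))   # AfterEveryRecord
    return out
```

This is why the source must be sorted on the monitored field: the change detector fires once per group only when all of a group's rows are adjacent.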
Exercise 1. Create our map based on the specifications given below. 2. Run the map and observe the result.
There follows some information taken from reports generated by Repository Manager from the One_to_ManyRecordTypes transformation in the Solutions folder: Source (ASCII (Delimited))
location $(funData)Autos_Sorted.txt
SourceOptions
header True
outputmode
Replace
Source R1 Events
AfterEveryRecord ClearMapPut Record
Target Auto false
OnDataChange1
ClearMapPut Record
Target Employee false
Record Auto
Name      Type   Length   Description
RecordID  Text   1
Initials  Text   2
Year      Text   4
Make      Text   10
Color     Text   5
Total            22
Map Expressions
Employee.RecordID    "E"
Employee.Initials    Records("R1").Fields("Initials")
Employee.Phone       Records("R1").Fields("Phone")
Employee.City        Records("R1").Fields("City")
Employee.State       Records("R1").Fields("State")
Auto.RecordID        "A"
Auto.Initials        Records("R1").Fields("Initials")
Auto.Year            Records("R1").Fields("Year")
Auto.Make            Records("R1").Fields("Make")
Auto.Color           Records("R1").Fields("Color")
With this exercise we'll take the file that we created in the last exercise and change it back into the format it had before. So here we have this:
E1 A1 A2 A3 E2 A1 A2
And we want this in our target:
E1,A1
E1,A2
E1,A3
E2,A1
E2,A2
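The reverse operation (pairing each A record back with the E record that precedes it) boils down to remembering the last employee record seen and emitting one flat row per auto record. A hedged Python sketch of that shape, with a simplified record representation rather than the product's buffers:

```python
def many_to_one(records):
    """records: list of ('E', data) or ('A', data) tuples in file order.
    Return one (employee, auto) pair per 'A' record, joined to the
    most recently seen 'E' record."""
    out = []
    employee = None
    for rectype, data in records:
        if rectype == "E":
            employee = data                 # carry forward until the next E
        elif rectype == "A" and employee is not None:
            out.append((employee, data))    # flatten: one target row per auto
    return out
```

This mirrors why the map keys its ClearMapPut off the Auto record type: the employee fields stay in their buffer across the auto rows, so each auto put can reference them.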
Exercise 1. Create our map based on the specifications given below. 2. Run the map and observe the result.
There follows some information taken from reports generated by Repository Manager from the Many_to_OneRecordType transformation in the Solutions folder: Source (ASCII (Delimited))
location $(funData)SrcAutosRecordType.txt
Structured Schema
originallocation   xmldb:ref:///C:/Cosmos_Work/Fundamentals/Solutions/MapDesigner_MappingTechniques
schemaname         AutosRecordType.ss.xml
Auto Events
AfterEveryRecord ClearMapPut Record
Target R1 false
TargetOptions
header True
Map Expressions
R1.Initials
Records("Employee").Fields("Initials")
Exercise Simply open any RIFL Script in the Editor window and click the Save button on the toolbar. This saves a text file with a RIFL extension somewhere on your network. To reuse the script, click the Open Folder toolbar button in another Script editor window. You will need to manually change any parameters for use in the new Script window. Next, we will show you how to make the functions more flexible by abstracting them into User Defined Functions and storing them in Code Modules.
Exercise
1. Create our map based on the specifications given below.
2. Run the map and observe the result.
There follows some information taken from reports generated by Repository Manager from the CodeReuse_UserDefinedFunction transformation in the Solutions folder:
Code Module: $(funData)Scripts\ZipCodeLogic.rifl
Source (ASCII (Delimited))
location $(funData)Accounts.txt
Source R1 Events
AfterEveryRecord ClearMapPut Record
Target R1
Map Expressions
R1.Account Number R1.Zip R1.ZipReport
Lookup Wizards
Lookup Wizards automate the process of creating lookups for your Transformations. You name the lookup (or select an existing lookup to edit), browse to files or tables to automatically build connection strings, and select the key and returned fields. At the end of each Lookup Wizard, a reusable code module is created in your workspace containing the functions you need for performing lookups. The code module files generated by these wizards can then be reused in any Map you create. There are three types of lookup methodologies, and each has its advantages in certain situations:
Static Flat File Lookups are fast, but not very portable or dynamic.
Dynamic SQL Lookups are portable and dynamic, but not very fast.
Incore Table Lookups are extremely fast and can be made more dynamic with extra RIFL code, but they use core memory to store the data.
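As an illustration of the first trade-off, here is a minimal Python sketch (not RIFL, and not the wizard-generated code) of the static flat-file approach: the file is read once into memory, and each lookup returns a default value when no key matches. The file contents and delimiter are invented for the example:

```python
# Illustrative sketch (not RIFL): a flat-file lookup loaded once into a dict,
# returning a default when no key matches -- the behavior a generated
# flat-file lookup function provides.
import io

def build_lookup(fileobj, delim="|"):
    table = {}
    for line in fileobj:
        key, value = line.rstrip("\n").split(delim, 1)
        table[key] = value
    return table

# In practice this would be open("categories.txt"); StringIO keeps it self-contained.
lookup = build_lookup(io.StringIO("BK|Books\nEL|Electronics\n"))
print(lookup.get("EL", "NoMatches"))   # Electronics
print(lookup.get("ZZ", "NoMatches"))   # NoMatches
```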
Keywords: Lookup Wizard, Count & Counter Variable parameters, One-to-Many records (unrolling occurrences), and referencing Target Field values
Description
Flat File Lookups allow us to look up data from a file that is not our source. We reference this data with a key value that does come from the source; the lookup returns the matching data, or a default value if no match is found. The Lookup Function Wizard allows us to build these customized functions and store them in a code module.
We will also be unrolling a data field that contains multiple values. The Favorites categories are all stored in one field with a pipe delimiter separating them. We will create a unique target record for each of the values stored in a single source record. The Count and Counter Variable parameters of the ClearMapPut action can be used to parse this field and unroll the records dynamically.
Exercise
1. Create our map based on the specifications given below.
2. Run the map and observe the result.
There follows some information taken from reports generated by Repository Manager from the FlatFileLookup transformation in the Solutions folder:
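The Count/Counter mechanics can be sketched outside the tool. The following Python analogue (not RIFL) mirrors CharCount("|", field) + 1 driving the number of puts, and Parse(counter, field, "|") selecting each value; the account number and category codes are invented for illustration:

```python
# Illustrative sketch (not RIFL): how the Count parameter (CharCount(...) + 1)
# and the counter variable drive one target record per pipe-delimited value.
def unroll(account, favorites, delim="|"):
    count = favorites.count(delim) + 1            # RIFL: CharCount("|", field) + 1
    rows = []
    for counter in range(1, count + 1):           # counter variable runs 1..count
        value = favorites.split(delim)[counter - 1]   # RIFL: Parse(counter, field, "|")
        rows.append((account, value))
    return rows

print(unroll("01-000123", "BK|EL|GR"))
# [('01-000123', 'BK'), ('01-000123', 'EL'), ('01-000123', 'GR')]
```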
Source R1 Events
AfterEveryRecord ClearMapPut Record
target name Target
record layout R1
count
' Evaluate source field to determine how many occurrences of data exist.
' This is translated to the number of child records written to the Favorites table.
CharCount("|", Records("R1").Fields("Favorites")) + 1
counter variable myFavoritesCounter
false
Map Expressions
R1.FavoritesID       Serial()
R1.Account Number    Records("R1").Fields("Account Number")
R1.CategoryCode      parse(myFavoritesCounter, Records("R1").Fields("Favorites"), "|")
R1.CategoryLiteral   myCategories_Field2_Lookup(Targets(0).Records("R1").Fields("CategoryCode"), "NoMatches")
R1.ProductManager    myCategories_Field3_Lookup(Targets(0).Records("R1").Fields("CategoryCode"), "NoManagers")
Keywords: Lookup Wizard, Dynamic SQL Lookup, Count & Counter Variable parameters, One-to-Many records (unrolling occurrences), and referencing Target Field values
Description
Dynamic SQL Lookups allow us to look up values from another source when that source is a relational table or view. Again we will use the Lookup Function Wizard to create User Defined Functions that are stored in a code module.
Exercise
1. Create our map based on the specifications given below.
2. Run the map and observe the result.
There follows some information taken from reports generated by Repository Manager from the DSQLLookup transformation in the Solutions folder:
Code Module: $(funData)Scripts\myCategories.dynsql.rifl
Source (ASCII (Delimited))
location $(funData)Accounts.txt
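The pattern the wizard generates, one SELECT issued per source record through a DJImport object, can be sketched with any SQL client. Here is a Python/sqlite3 analogue (not RIFL/DJImport); the table and column names are invented for the example:

```python
# Illustrative sketch (not RIFL/DJImport): a per-record SQL lookup, issuing one
# SELECT per key the way a Dynamic SQL Lookup does. Names are invented.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tlkpCategories (CategoryCode TEXT, Category TEXT)")
conn.executemany("INSERT INTO tlkpCategories VALUES (?, ?)",
                 [("BK", "Books"), ("EL", "Electronics")])

def category_lookup(code, default="NoMatches"):
    # One round trip to the database for every lookup call -- portable and
    # dynamic, but this is exactly why Dynamic SQL Lookups are the slowest kind.
    row = conn.execute(
        "SELECT Category FROM tlkpCategories WHERE CategoryCode = ?", (code,)
    ).fetchone()
    return row[0] if row else default

print(category_lookup("BK"))   # Books
print(category_lookup("ZZ"))   # NoMatches
```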
Variables
Name CatImp
Type DJImport
Public yes
Value
MapEvents
BeforeTransformation Execute
expression
'Initialize the DJImport object
myCategories_Init()
AfterTransformation Execute
expression
myCategories_Terminate()
Source R1 Events
AfterEveryRecord ClearMapPut Record
target name Target
record layout R1
count
' Evaluate source field to determine how many occurrences of data exist.
' This is translated to the number of child records written to the Favorites table.
CharCount("|", Records("R1").Fields("Favorites")) + 1
counter variable myFavoritesCounter
false
Map Expressions
R1.FavoritesID       Serial()
R1.Account Number    Records("R1").Fields("Account Number")
R1.CategoryCode      parse(myFavoritesCounter, Records("R1").Fields("Favorites"), "|")
R1.CategoryLiteral   myCategories_Category_Lookup(Targets(0).Records("R1").Fields("CategoryCode"), "NoMatches")
R1.ProductManager    myCategories_ProductManager_Lookup(Targets(0).Records("R1").Fields("CategoryCode"), "NoManagers")
Keywords: Lookup Wizard, Incore Memory Table & Lookup, Count & Counter Variable parameters, One-to-Many records (unrolling occurrences), and referencing Target Field values
Description
An incore memory table lookup can be utilized when speed is of the utmost importance. The primary method of creating the incore table is to make use of a DJImport object, much the same as we did with the Dynamic SQL lookup. However, you take the record set returned by the Select statement and store it in a memory table. The memory table is then accessed to perform the lookup.
Exercise
1. Create our map based on the specifications given below.
2. Run the map and observe the result.
There follows some information taken from reports generated by Repository Manager from the InCoreLookup transformation in the Solutions folder:
Code Module: $(funData)Scripts\myCategories.itable.rifl
Source (ASCII (Delimited))
location $(funData)Accounts.txt
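The incore idea, query once, cache the row set in memory, then serve every lookup from the cache, can be sketched as follows in Python (not RIFL; table and column names are invented for the example):

```python
# Illustrative sketch (not RIFL): the incore pattern -- run the SELECT once at
# "BeforeTransformation" time, cache the result in memory, and serve every
# per-record lookup from the cache with no database round trips.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tlkpCategories (CategoryCode TEXT, Category TEXT)")
conn.executemany("INSERT INTO tlkpCategories VALUES (?, ?)",
                 [("BK", "Books"), ("EL", "Electronics")])

# "Init": one query, its row set cached as an in-memory table
incore = dict(conn.execute("SELECT CategoryCode, Category FROM tlkpCategories"))

def category_lookup(code, default="NoMatches"):
    return incore.get(code, default)      # pure memory access per record

print(category_lookup("EL"))   # Electronics
print(category_lookup("ZZ"))   # NoMatches
```

The trade-off named above is visible here: the dict consumes memory proportional to the row set, in exchange for constant-time lookups.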
MapEvents
BeforeTransformation Execute
expression myCategories_Init()
AfterTransformation Execute
expression myCategories_WriteToFile("$(funData)myCategoriesFile.txt", "|")
Source R1 Events
AfterEveryRecord ClearMapPut Record
target name Target
record layout R1
count
' Evaluate source field to determine how many occurrences of data exist.
' This is translated to the number of child records written to the Favorites table.
CharCount("|", Records("R1").Fields("Favorites")) + 1
counter variable myFavoritesCounter
false
Map Expressions
R1.FavoritesID       Serial()
R1.Account Number    Records("R1").Fields("Account Number")
R1.CategoryCode      parse(myFavoritesCounter, Records("R1").Fields("Favorites"), "|")
R1.CategoryLiteral   myCategories_Category_Lookup(Targets(0).Records("R1").Fields("CategoryCode"), "NoMatches")
R1.ProductManager    myCategories_ProductManager_Lookup(Targets(0).Records("R1").Fields("CategoryCode"), "NoManagers")
RDBMS Mapping
TargetOptions
header True
Outputmode Replace
Source R1 Events
AfterEveryRecord ClearMapPut Record
target name Target
record layout R1
Map Expressions
R1.AccountNumber     Fields("AccountNumber")
R1.Name              Fields("Name")
R1.Company           Fields("Company")
R1.Street            Fields("Street")
R1.City              Fields("City")
R1.State             Fields("State")
R1.Zip               Fields("Zip")
R1.Email             Fields("Email")
R1.BirthDate         Fields("BirthDate")
R1.Favorites         Fields("Favorites")
R1.StandardPayment   Fields("StandardPayment")
R1.LastPayment       Fields("LastPayment")
R1.Balance           Fields("Balance")
Integration Querybuilder
Objectives
At the end of this lesson you should be able to extract data from one or more tables in the same database by using a SQL passthrough statement.
Keywords: Integration Query Builder, SQL Passthrough Statements
Description
The Map Designer source connectors allow for passing Select statements through to a database server to obtain a row set. The row set that is returned by the query then becomes the source data for your Map. Use the Integration Query Builder to generate the source record set. Alternatively, you can use the SQL script that generates this source record set by choosing the SQL File connection option and pointing to the matching SQL script file in the Scripts folder.
When you choose an RDBMS source connector, an additional choice appears on the Source tab. You can now choose whether you want to point directly to a table or view, pass a SQL statement through, or point to a SQL script file that already contains a SQL statement. We will construct our own using the Query Builder.
Exercise
Once you have connected to a data source (described below), your connection is displayed in the upper-right pane. You can set up and save as many data source connections as you need. Integration Querybuilder stores all connections you create unless you explicitly delete them.
1. Double-click the connection you want to use. The DB Browser in the lower-right pane will display the database.
2. Click the database icon to display the icons for the tables, views and procedures in this database. Clicking on these will display their contents. Click on the individual tables to list their columns, or right-click and select Get Details from the shortcut menu to see the SQL representation of column attributes such as length, data types and whether they are used as primary or secondary keys.
3. To create a query, select New Query from the Query menu. A new query icon will be opened beneath the connection icon in the upper-right pane.
You can rename this now or later by right-clicking the icon.
4. Drag the tables and views you want to use into the upper-left pane. This is called the Relations pane. As you drag tables into this pane, you will see that SELECT... FROM statements are created in the SQL pane. If tables are already linked in the database, these links will be displayed, although they can be changed or removed for the purpose of this particular query. If you are using a table more than once, the second and further copies will be renamed. For example, if you already have a Customer table in the Relations pane and you drag across another copy, it will be automatically renamed Customer1.
The Select statement that is generated becomes part of the connection string and it is passed through to the database server. We can now map this data into any target type and format we desire. There follows some information taken from reports generated by Repository Manager from the RDBMS_SelectStatements transformation in the Solutions folder:
srcPurchases.PONumber, srcPurchases.Category, srcPurchases.ProductNumber, srcPurchases.ShipmentMethodCode FROM (srcAccounts RIGHT JOIN srcPurchases ON srcAccounts.[Account Number] = srcPurchases.AccountNumber) ORDER BY srcPurchases.ShipmentMethodCode, srcAccounts.City
TargetOptions
header True
Outputmode Replace
Source R1 Events
AfterEveryRecord ClearMapPut Record
Target R1
Map Expressions
R1.Account Number R1.Name R1.Company
Keywords: Integration Query Builder, DJX Syntax, Dynamic Row Sets via User Interaction, InputBox
Description
With DJX, you escape into the RIFL (Rapid Integration and Flow Language) expression language, where you can build SQL statements dynamically. This allows you to pull runtime values into a SQL statement. For instance, if you wanted only the records that were entered into a table yesterday, you could use the RIFL Date function to return the current system date and the DateAdd function to subtract one day, then make that value part of your Select statement via DJX.
This exercise will pull in the records from the tblAccounts table that are from a particular state. That state will be passed into the SQL Select statement at runtime.
Exercise
1. Create our map based on the specifications given below.
2. Run the map and observe the result.
There follows some information taken from reports generated by Repository Manager from the DJXSelectStatement transformation in the Solutions folder:
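The DJX idea of splicing runtime values into the statement can be sketched in any language. Here is a Python analogue (not DJX/RIFL; the EntryDate column and WHERE clause are invented for illustration):

```python
# Illustrative sketch (not DJX/RIFL): build a SELECT at runtime from a user
# value and a computed date, the way DJX splices RIFL expressions into SQL.
from datetime import date, timedelta

state = "TX"                                   # would come from InputBox at runtime
yesterday = date.today() - timedelta(days=1)   # RIFL analogue: DateAdd("d", -1, Date())

sql = ("SELECT * FROM tblAccounts "
       f"WHERE State = '{state}' AND EntryDate >= '{yesterday.isoformat()}'")
print(sql)
```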
Target (HTML)
location $(funData)AccountsbyState.htm
TargetOptions
index mode tableborder False table True
Outputmode Replace
Variables
Name varState
Type Variant
Public no
Value
MapEvents
BeforeTransformation Execute
expression varState = InputBox("Enter the two letter code for the State", "State Input", "TX")
Source R1 Events
AfterEveryRecord ClearMapPut Record
Target R1
Map Expressions
R1.AccountNumber   Fields("AccountNumber")
R1.Name            Fields("Name")
R1.Company         Fields("Company")
R1.Street          Fields("Street")
R1.City            Fields("City")
R1.State           Fields("State")
R1.Zip             Fields("Zip")
R1.Email           Fields("Email")
R1.BirthDate       Fields("BirthDate")
R1.Favorites       Fields("Favorites")
Multimode Introduction
Keywords: Multimode Functionality, Insert Action, and Count Parameter
Multimode allows us to write to more than one table in the same database within the same Transformation. The Account Numbers in the Accounts.txt file all start with either 01 or 02. The ones that start with 01 are trading partners; we want to set up a Transformation that sends those records to the tblTradingPartners table in the TrainingDB database. The records that start with 02 are individual customers, and we want them to go into the tblIndividuals table.
Exercise
1. Create our map based on the specifications given below.
2. Run the map and observe the result.
There follows some information taken from reports generated by Repository Manager from the Mulitmode_Introduction transformation in the Solutions folder:
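The Count-parameter routing can be sketched as a pure function: an Insert action whose count evaluates to 1 writes the record, and one whose count evaluates to 0 skips it, so each record lands in exactly one table. A Python analogue (account numbers invented for illustration):

```python
# Illustrative sketch (not RIFL): the Count-parameter trick. Each Insert
# action's count evaluates to 1 (write) or 0 (skip) based on the account
# number prefix, so every record is routed to exactly one table.
def route(account_number):
    individuals_count = 1 if account_number.startswith("02") else 0
    trading_count = 1 if account_number.startswith("01") else 0
    return {"tblIndividuals": individuals_count,
            "tblTradingPartners": trading_count}

print(route("02-000415"))  # {'tblIndividuals': 1, 'tblTradingPartners': 0}
print(route("01-000007"))  # {'tblIndividuals': 0, 'tblTradingPartners': 1}
```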
MapEvents
BeforeTransformation Drop Table
target name Target
table name tblIndividuals
BeforeTransformation Drop Table
target name Target
table name tblTradingPartners
BeforeTransformation Create Table
target name Target
record layout TradingPartners
table name tblTradingPartners
Source R1 Events
AfterEveryRecord ClearMapInsert Record
target name Target
record layout Individuals
table name tblIndividuals
count
'Evals customer code and sets Count to 1 if Individual
If Left(Records("R1").Fields("Account Number"), 2) = "02" Then
    1
Else
    0
End if
AfterEveryRecord
target name record layout table name count Target
ClearMapInsert Record
Record TradingPartners
Name              Type  Length  Description
Account Number    CHAR  9
Name              CHAR  21
Company           CHAR  31
Street                  35
City                    16
State                   2
Zip                     10
Email                   25
Standard Payment        20
Payments                20
Balance                 20
Total                   209
Map Expressions
Individuals.Account Number     Records("R1").Fields("Account Number")
Individuals.Name               Records("R1").Fields("Name")
Individuals.Street             Records("R1").Fields("Street")
Individuals.City               Records("R1").Fields("City")
Individuals.State              Records("R1").Fields("State")
Individuals.Zip                Records("R1").Fields("Zip")
Individuals.Email              Records("R1").Fields("Email")
Individuals.Birth Date         DatevalMask(Records("R1").Fields("Birth Date"), "mm/dd/yyyy")
Individuals.Favorites          Records("R1").Fields("Favorites")
Individuals.Standard Payment   Records("R1").Fields("Standard Payment")
Individuals.Payments           Records("R1").Fields("Payments")
Individuals.Balance            Records("R1").Fields("Balance")
Map Expressions
TradingPartners.Account Number     Records("R1").Fields("Account Number")
TradingPartners.Name               Records("R1").Fields("Name")
TradingPartners.Company            Records("R1").Fields("Company")
TradingPartners.Street             Records("R1").Fields("Street")
TradingPartners.City               Records("R1").Fields("City")
TradingPartners.State              Records("R1").Fields("State")
TradingPartners.Zip                Records("R1").Fields("Zip")
TradingPartners.Email              Records("R1").Fields("Email")
TradingPartners.Standard Payment   Records("R1").Fields("Standard Payment")
TradingPartners.Payments           Records("R1").Fields("Payments")
TradingPartners.Balance            Records("R1").Fields("Balance")
Variables
Name rejectReason
Type Variant
Public no
Value "NoReasonAtAll"
MapEvents
BeforeTransformation Drop Table
target name Target
table name tblEntity
BeforeTransformation Drop Table
target name Target
table name tblFavorites
BeforeTransformation Drop Table
target name Target
table name tblPayments
BeforeTransformation Create Table
target name Target
record layout Favorites
table name tblFavorites
BeforeTransformation Create Table
target name Target
record layout Payments
table name tblPayments
BeforeTransformation Create Table
target name Target
record layout Rejects
table name tblRejects
BeforeTransformation Create Index
target name Target
record layout Favorites
table name tblFavorites
index name idxFavorites
unique True
BeforeTransformation Create Index
target name Target
record layout Payments
table name tblPayments
index name idxPayments
unique false
BeforeTransformation Create Index
target name Target
record layout Rejects
table name tblRejects
index name idxRejects
unique false
Source R1 Events
AfterEveryRecord ClearMapInsert Record
Target Entity tblEntity
AfterEveryRecord
target name record layout table name count counter variable Target
ClearMapInsert Record
AfterEveryRecord
target name record layout table name
ClearMapInsert Record
Target Payments tblPayments
Target Events
OnConstraintError Execute
expression rejectReason = "General-OnConstraint"
OnConstraintError ClearMapInsert Record
Target Rejects tblRejects
OnConstraintError Resume
OnError Execute
expression rejectReason = "General-OnError event"
OnError ClearMapInsert Record
Target Rejects tblRejects
OnError Resume
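The reject-table pattern above, tag the failing record with a reason, insert it into tblRejects, and Resume with the next record, can be sketched as follows in Python (not RIFL; the constraint check is simulated and all names are invented):

```python
# Illustrative sketch (not RIFL): on a constraint violation, tag the record
# with a reject reason and route it to a rejects table instead of aborting,
# then resume with the next record.
def load(records, key_exists):
    loaded, rejects = [], []
    for rec in records:
        if key_exists(rec["Account Number"]):   # simulated OnConstraintError
            rejects.append({**rec, "RejectReason": "General-OnConstraint"})
        else:                                   # normal path: insert succeeds
            loaded.append(rec)
    return loaded, rejects

seen = {"01-000001"}                            # key already in the target table
loaded, rejects = load(
    [{"Account Number": "01-000001"}, {"Account Number": "02-000002"}],
    lambda k: k in seen)
print(len(loaded), len(rejects))   # 1 1
```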
Map Expressions
Entity.Account Number   Records("R1").Fields("Account Number")
Entity.Name             Records("R1").Fields("Name")
Entity.Company          Records("R1").Fields("Company")
Entity.Street           Records("R1").Fields("Street")
Entity.City             Records("R1").Fields("City")
Entity.State            Records("R1").Fields("State")
Entity.Zip              Records("R1").Fields("Zip")
Entity.Email            Records("R1").Fields("Email")
Entity.Birth Date       DateValMask(Records("R1").Fields("Birth Date"), "mm/dd/yyyy")
Map Expressions
Favorites.Account Number   Records("R1").Fields("Account Number")
Favorites.FavoritesID      Serial(0) ' Starts at 1 each execution. Consider using a lookup to get Max Value first.
Favorites.Favorites        Parse(cntFavorites, Records("R1").Fields("Favorites"), "|")
Map Expressions
Payments.Account Number   Records("R1").Fields("Account Number")
Payments.PaymentID        Serial(0) 'Starts at one each execution. Consider using lookup for Max Value
Payments.Payments         Records("R1").Fields("Payments")
Payments.Balance          Records("R1").Fields("Balance")
Map Expressions
Rejects.Account Number   Records("R1").Fields("Account Number")
Rejects.RejectID         Serial(0) 'Starts at one each execution. Consider using lookup for Max Value
Rejects.RejectReason     rejectReason ' Use to set your own error message
(remaining expressions)  Records("R1").Fields("Name") Records("R1").Fields("Company") Records("R1").Fields("Street") Records("R1").Fields("City") Records("R1").Fields("State") Records("R1").Fields("Zip") Records("R1").Fields("Email") Records("R1").Fields("Birth Date")
Variables
Name varUpsertFlag
Type Variant
Public no
Value 0
Map Events
BeforeTransformation Drop Table
target name Target
table name tblIndividuals
BeforeTransformation Drop Table
target name Target
table name tblTradingPartners
BeforeTransformation Create Table
target name Target
record layout TradingPartners
table name tblTradingPartners
Source R1 Events
AfterEveryRecord ClearMapInsert Record
target name Target
record layout Individuals
table name tblIndividuals
count
'Checks the Upsert Flag
If varUpsertFlag = 0 then
    'Evals customer code and sets Count to 1 if Individual
    If Left(Records("R1").Fields("Account Number"), 2) = "02" Then
        1
    End If
Else
    0
End if
AfterEveryRecord ClearMapInsert Record
target name Target
record layout TradingPartners
table name tblTradingPartners
count
'Checks the Upsert Flag
If varUpsertFlag = 0 then
    'Evals customer code and sets Count to 1 if Trading Partner
    If Left(Records("R1").Fields("Account Number"), 2) = "01" Then
        1
    End If
Else
    0
End if
AfterEveryRecord
target name record layout count Target
ClearMap
AfterEveryRecord
target name record layout count Target
ClearMap
AfterEveryRecord Upsert Record
target name Target
record layout Individuals
table name tblIndividuals
count
'Checks the Upsert Flag
If varUpsertFlag = 1 then
    'Evals customer code and sets Count to 1 if Individual
    If Left(Records("R1").Fields("Account Number"), 2) = "02" Then
        1
    End If
Else
    0
End if
AfterEveryRecord Upsert Record
target name Target
record layout TradingPartners
table name tblTradingPartners
count
'Checks the Upsert Flag
If varUpsertFlag = 1 then
    'Evals customer code and sets Count to 1 if Trading Partner
    If Left(Records("R1").Fields("Account Number"), 2) = "01" Then
        1
    End If
Else
    0
End if
Source Events
OnEOF ChangeSource
Source
If varUpsertFlag = 0 then
    varUpsertFlag = 1
    "+File=$(funData)AccountsUpdate.txt"
End if
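The OnEOF/ChangeSource technique amounts to a two-pass load: pass one inserts from the initial file, then the flag flips and the update file is processed with upserts (update if the key exists, insert otherwise). A Python sketch of the net effect (not RIFL; the data is invented for illustration):

```python
# Illustrative sketch (not RIFL): two-pass load driven by an upsert flag.
# Pass 1 (varUpsertFlag = 0) inserts from the initial source; on EOF the
# source changes and pass 2 (varUpsertFlag = 1) upserts from the update file.
def run(initial, updates):
    table = {}
    for rec in initial:                       # pass 1: plain inserts
        table[rec["Account Number"]] = rec
    for rec in updates:                       # pass 2: upserts
        table.setdefault(rec["Account Number"], {}).update(rec)
    return table

t = run([{"Account Number": "01", "Balance": 10}],
        [{"Account Number": "01", "Balance": 25},    # existing key -> update
         {"Account Number": "02", "Balance": 5}])    # new key -> insert
print(t["01"]["Balance"], len(t))   # 25 2
```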
Map Expressions
Individuals.Account Number     Records("R1").Fields("Account Number")
Individuals.Name               Records("R1").Fields("Name")
Individuals.Street             Records("R1").Fields("Street")
Individuals.City               Records("R1").Fields("City")
Individuals.State              Records("R1").Fields("State")
Individuals.Zip                Records("R1").Fields("Zip")
Individuals.Email              Records("R1").Fields("Email")
Individuals.Birth Date         DatevalMask(Records("R1").Fields("Birth Date"), "mm/dd/yyyy")
Individuals.Favorites          Records("R1").Fields("Favorites")
Individuals.Standard Payment   Records("R1").Fields("Standard Payment")
Individuals.Payments           Records("R1").Fields("Payments")
Individuals.Balance            Records("R1").Fields("Balance")
Map Expressions
TradingPartners.Account Number     Records("R1").Fields("Account Number")
TradingPartners.Name               Records("R1").Fields("Name")
TradingPartners.Company            Records("R1").Fields("Company")
TradingPartners.Street             Records("R1").Fields("Street")
TradingPartners.City               Records("R1").Fields("City")
TradingPartners.State              Records("R1").Fields("State")
TradingPartners.Zip                Records("R1").Fields("Zip")
TradingPartners.Email              Records("R1").Fields("Email")
TradingPartners.Standard Payment   Records("R1").Fields("Standard Payment")
TradingPartners.Payments           Records("R1").Fields("Payments")
TradingPartners.Balance            Records("R1").Fields("Balance")
Management Tools
The Pervasive Integration Platform includes several tools that perform tasks convenient for users working with the main applications.
Upgrade Utility
The Upgrade Utility allows you to update existing Transformations created in previous versions of Map Designer to the current version, Map Designer 8.x.
To Upgrade from v7.xx
1. In the MDB Database box, type in the location of the Transformation you wish to convert, including the complete path. Or you may click the Find button, navigate to the correct location, and then click Open.
To Upgrade to v8.x
1. In the Workspace box, type in the name of the Workspace in which you wish to save the new Transformation. Or you may click the Change button, use the Workspace Manager to select your Workspace, and then click OK.
2. In the Repository URL box, type in the location of the new files, including the complete path. Or you may click the Change button, select the correct Repository, and then click OK.
To Start Upgrade
1. Click the Start Upgrade button to run the upgrade. The Upgrade Utility first converts the Record Layouts, then the Maps, and last, the Processes, in succession. If you need to stop the upgrade, you may click the Abort Upgrade button during the conversion. The step that is running when you select Abort Upgrade will complete, and then the upgrade will abort. You can follow the upgrade in the Upgrade Status section, which displays Completed! after the Upgrade Utility finishes upgrading the Record Layouts, Maps and Processes.
2. Click Done to exit the program.
Limitation
When you upgrade a Transformation, the version number of the map is reset to 1.0 (as if you were adding a new map to the database). You will need to change the version information in the Transformation and Map Properties dialog.
Engine Profiler
The Engine Profiler is a tool designed to fine-tune your Transformations and Processes. An excellent document is available at C:\Program Files\Pervasive\Cosmos\Common800\Help\engine_profiler.pdf; it goes into detail on the functionality and use of the Engine Profiler.
Data Profiler
Data Profiler is a data quality analysis tool. An excellent document is available at C:\Program Files\Pervasive\Cosmos\Common800\Help\data_profiler.pdf; it goes into detail on the functionality and use of the Data Profiler. A class offering for this tool is also available from the Pervasive Training Department.