
Bulk Loading in BODS using the File method, i.e.

SQL*Loader
Of the two bulk-loading methods in BODS, i.e. API and File, this document covers the File method
(i.e. using SQL*Loader).

If you select the File method, Data Services writes an intermediate staging file, control file, and log files
to the local disk and invokes the Oracle SQL*Loader. This method requires more processing time than
the API method.

With the File method, a direct-path load is faster than a conventional load, but the File method is still
slower than the API method because of the need to generate a staging file and logs and to invoke Oracle's SQL*Loader.
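The sequence the File method follows (write a staging data file, write a control file, invoke SQL*Loader) can be sketched as below. This is an illustration only, not what Data Services itself generates: the table name, column list, sample rows, and the userid placeholder are all hypothetical.

```python
import subprocess

# Hypothetical table and columns -- in practice Data Services derives these
# from the target table in the dataflow.
TABLE = "STG_INV_GL_ACCT_HIER"
COLUMNS = ["ACCT_ID", "ACCT_NAME", "HIER_LEVEL"]
ROWS = [("100", "Cash", "1"), ("200", "Receivables", "2")]

# 1. Write the intermediate staging (data) file, using the default
#    field delimiter (comma) and text delimiter (double quote).
with open(f"{TABLE}.dat", "w") as dat:
    for row in ROWS:
        dat.write(",".join(f'"{v}"' for v in row) + "\n")

# 2. Write the SQL*Loader control file that describes the staging file.
control = (
    f"LOAD DATA\n"
    f"INFILE '{TABLE}.dat'\n"
    f"INTO TABLE {TABLE}\n"
    f"FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '\"'\n"
    f"({', '.join(COLUMNS)})\n"
)
with open(f"{TABLE}.ctl", "w") as ctl:
    ctl.write(control)

# 3. Invoke SQL*Loader. Shown but not executed here: it requires an Oracle
#    client install and valid credentials (the userid below is a placeholder).
cmd = ["sqlldr", "userid=scott/tiger@ORCL",
       f"control={TABLE}.ctl", f"log={TABLE}.log"]
# subprocess.run(cmd, check=True)
```

The extra staging and control files written in steps 1 and 2, plus the external `sqlldr` process in step 3, are exactly the overhead that makes this method slower than the API method.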

Following are the steps that need to be performed to run BODS dataflows (DFs) using the File method.

Step 1: Datastore Settings


Identify the datastore that is being used by the respective job. Right-click the datastore (for example,
DS_ORACLE_EDWETL1_DS), click Edit, click Advanced, then click Edit again; the configuration window
will pop up. In the configuration settings, click the Bulk loader directory option and provide the path of
the shared directory that is to be used for this method. As shown in the
following screenshot, in our case we are using the \Saved_Output directory.

Step 2: Setting Target Tables Options


For example, consider a dataflow, say "DF_STG_INV_GL_ACCT_HIER".
Double-click the target table STG_INV_GL_ACCT_HIER → click the Bulk Loader Options tab → set the Bulk load
option to File.

Mode: Specify the mode for loading data into the target table.

Rows per commit: Specifies the transaction size, in number of rows, for bulk loading. If Rows per
commit is set to 1000, a commit is issued to the underlying database every 1000 rows. If you do not enter
a value, the default (1000) is used.

Maximum rejects: Enter the maximum number of error records allowed before the job is terminated.
If you do not enter a value, the default (10) is used.

SQL *Loader version: The version used to load data into the table. The version of the Oracle
SQL*Loader and the database (specified in the datastore for the target) must match.

Text delimiter: Enter the character used to delimit char or varchar columns. The default character is a
double quotation mark ("). Make sure the character you enter is not used in any of the data columns.

Field delimiter: Enter the character used to separate columns. The default character is a comma (,).
Make sure the character you enter is not used in any of the data columns. You can specify a
non-printable character by entering its ASCII equivalent; in our example we are using /127.

Maximum bind array: Enter the maximum bind array size. The bind array needs to be large enough
to contain a single row; for good performance, make it large enough to hold 100 rows. If you
do not enter a value, the default Oracle bulk loader value is used. In our example we are using
2560000.

Use the control file: Select this check box to load data from a specific bulk loading control file and
data file. Rather than loading data from the source shown in the data flow, Data Services directs Oracle
to load data from the data file associated with the named control file.

Generate files only: Select this check box to have Data Services generate a data and control file.
Rather than loading data into the target shown in the data flow, Data Services generates a control file
and a data file that you can later load using Oracle bulk loading.

Direct path: Select this check box to specify a direct-path load. To use direct-path load, the
version of SQL*Loader available to the Job Server executing the job must be the same as the
target database version. For example, you cannot perform a SQL*Loader Version 7.1.2 direct-path
load into an Oracle Version 7.1.3 database. For more information, see the Oracle
server documentation.

Clean up bulk loader directory after load: Select this check box to have Data Services delete all
bulk loader-related files (control file, data file, log file) after the load is complete. If an error
occurs during the bulk load, Data Services creates a .bad file and does not delete any files. Errors
occur when the log file was not created, or when the log file contains "ORA-" or "SQL*Loader-".

Trailing nullcols: Select this check box to indicate that columns not represented in the data being
loaded should be treated as null columns. Use this when a data record is incomplete but the existing data
needs to be loaded. If this option is not selected, the system generates an error.
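As a rough sketch of how the options above correspond to standard SQL*Loader control-file clauses (this is an illustration, not the file Data Services actually generates; the table and column names are hypothetical):

```python
# Option values taken from the descriptions above; the INFILE/INTO TABLE
# names and the column list are hypothetical placeholders.
opts = {
    "rows_per_commit": 1000,     # Rows per commit    -> ROWS
    "maximum_rejects": 10,       # Maximum rejects    -> ERRORS
    "bind_array": 2560000,       # Maximum bind array -> BINDSIZE
    "direct_path": True,         # Direct path        -> DIRECT
    "field_delimiter": "X'7F'",  # ASCII 127, written as hex (non-printable)
    "text_delimiter": "'\"'",    # double quotation mark
    "trailing_nullcols": True,   # Trailing nullcols  -> TRAILING NULLCOLS
}

clauses = [
    f"OPTIONS (DIRECT={'TRUE' if opts['direct_path'] else 'FALSE'}, "
    f"ROWS={opts['rows_per_commit']}, ERRORS={opts['maximum_rejects']}, "
    f"BINDSIZE={opts['bind_array']})",
    "LOAD DATA",
    "INFILE 'STG_INV_GL_ACCT_HIER.dat'",
    "INTO TABLE STG_INV_GL_ACCT_HIER",
    f"FIELDS TERMINATED BY {opts['field_delimiter']} "
    f"OPTIONALLY ENCLOSED BY {opts['text_delimiter']}",
]
if opts["trailing_nullcols"]:
    clauses.append("TRAILING NULLCOLS")
clauses.append("(ACCT_ID, ACCT_NAME, HIER_LEVEL)")

print("\n".join(clauses))
```

The non-printable field delimiter (ASCII 127) appears in the control file in SQL*Loader's hexadecimal notation, which is why a character unused in the data columns is a safe choice.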

Step 3:

After completing Step 2, execute the job.

Step 4:
After successful execution of the job without any bad records, three files are generated in the directory for
each dataflow, as shown in the following screenshot:
STG_INV_GL_ACCT_HIER12245_1512067171_1_0.ctl
STG_INV_GL_ACCT_HIER12245_1512067171_1_0.dat
STG_INV_GL_ACCT_HIER12245_1512067171_1_0
If the job executes successfully but some records are rejected, one extra file (.bad) is generated along with the
three files mentioned above (.ctl, .dat, and .log).
- The .bad file contains all rejected records.
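The error rule and cleanup behaviour described for the "Clean up bulk loader directory after load" option can be sketched like this; the helper names and file base name are hypothetical, and the file-naming pattern just mirrors the example above:

```python
import os

def load_succeeded(log_path):
    """Documented error rule: the load failed if the log file was not
    created, or if the log contains 'ORA-' or 'SQL*Loader-'."""
    if not os.path.exists(log_path):
        return False
    with open(log_path) as f:
        text = f.read()
    return "ORA-" not in text and "SQL*Loader-" not in text

def clean_up(base):
    """Delete the bulk-loader files only after a successful load; on
    error, leave everything in place (Data Services also writes a .bad
    file containing the rejected records)."""
    if not load_succeeded(base + ".log"):
        return False
    for ext in (".ctl", ".dat", ".log"):
        if os.path.exists(base + ext):
            os.remove(base + ext)
    return True
```

For example, a log containing "ORA-01400" would leave the .ctl, .dat, and .log files untouched for troubleshooting, while a clean log would let all three be deleted.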

Following are the details of the tests that were done using the File Method

Test 1

Bulk Loading Directory: \\fcna11\cognosdev\Saved_Output\EDW_SqlLoader_Jira


Repository Name: DS_DEV_REPO_003
Project Name: PROJ_BULK_LOADING
Source: Jira
Destination: Oracle
Job Name: JOB_BULK_LOADING_WITH_FILE_1
DataFlow Name: DF_JIRA_ISSUESTATUS
Test Result: Success
Comments: Job executed successfully with 80 records loaded in 6 secs
Next Steps/Solution: NA
Test 2

Bulk Loading Directory: \\fcna11\cognosdev\Saved_Output\EDW_SqlLoader_Jira


Repository Name: DS_DEV_REPO_003
Project Name: PROJ_BULK_LOADING
Source: Jira
Destination: Oracle
Job Name: JOB_BULK_LOADING_WITH_FILE_2
DataFlow Name: DF_JIRA_COMPONENT
Test Result: Fail
Comments: The job executes successfully in API mode but fails in File mode. The team investigated the
error but could not determine the root cause of the failure.
Next Steps/Solution: Need to check with SAP

Test 3

Bulk Loading Directory: \\fcna11\cognosdev\Saved_Output


Repository Name: DS_DEV_REPO_003
Project Name: PROJ_BULK_LOADING
Source: Oracle
Destination: Oracle
Job Name: JOB_BULK_LOADING_WITH_FILE_3
DataFlow Name: DF_STG_QLIKVIEW_BACKLOG_EXTRACT
Test Result: Success
Comments: Job executed successfully with 403402 records loaded in 36 sec.
Next Steps/Solution: NA

Test 4

Bulk Loading Directory: \\fcna11\cognosdev\Saved_Output


Repository Name: DS_DEV_REPO_003
Project Name: PROJ_BULK_LOADING
Source: Oracle
Destination: Oracle
Job Name: JOB_BULK_LOADING_WITH_FILE_4
DataFlow Name: DF_STG_INV_ACCT_DETERMINATION
Test Result: Success
Comments: Job executed successfully with 16500 records loaded in 7 sec.
Next Steps/Solution: NA
Test 5

Bulk Loading Directory: \\fcna11\cognosdev\Saved_Output


Repository Name: DS_DEV_REPO_003
Project Name: PROJ_BULK_LOADING
Source: SAP
Destination: Oracle
Job Name: JOB_BULK_LOADING_WITH_FILE_5
DataFlow Name: DF_LFA1
Test Result: Success
Comments: Job executed successfully with 36321 records loaded in 34 sec.
Next Steps/Solution: NA

Test 6

Bulk Loading Directory: \\fcna11\cognosdev\Saved_Output


Repository Name: DS_DEV_REPO_003
Project Name: PROJ_BULK_LOADING
Source: SFDC
Destination: Oracle
Job Name: JOB_BULK_LOADING_WITH_FILE_6
DataFlow Name: DF_SFDC_ENGAGEMENT_LINE_ITEM__C
Test Result: Success
Comments: Job executed successfully with 122534 records loaded in 12 mins.
Next Steps/Solution: NA

Test 7

Bulk Loading Directory: \\fcna11\cognosdev\Saved_Output


Repository Name: DS_DEV_REPO_003
Project Name: PROJ_BULK_LOADING
Source: Flat File
Destination: Oracle
Job Name: JOB_BULK_LOADING_WITH_FILE_7
DataFlow Name: DF_FILE_TO_ORACLE
Test Result: Success
Comments: Job executed successfully with 120 records loaded in 6 secs.
Next Steps/Solution: NA
Test 8

Flat-file targets do not have a bulk load option.

Test 9

SQL Server target tables do not have the File (SQL*Loader) bulk load option.
