
Error: 1Issue in loading kna1 table from ecc to hana through BODS

I am trying to load the KNA1 table from ECC to HANA through BODS, and I am getting an error message
like "28386 115107584 JOB 5/26/2013 10:51:24 PM Job <STUDENT_KNA1> is terminated
due to error <151001>".
NOTE: I successfully loaded the TCURR table from ECC to HANA through the same connection and
datastore.

Solution: There is a problem with the data buffer, which does not have enough space to accommodate
the data.
The cause is one of the following:

1. The data extracted for a row in an SAP application table source is larger than 512 bytes.
2. The Data Services Remote Function Call (RFC) Z_AW_RFC_READ_TABLE is not
installed on the SAP application server.

"When you use an SAP application table as a data flow source, Data Services extracts data using
the Data Services-provided RFC (remote function call) Z_AW_RFC_READ_TABLE. The
function call extracts up to 2048 bytes per row. If this function is not loaded to the SAP
application server, Data Services extracts data using the SAP-supplied function,
RFC_READ_TABLE. This function call limits extracted data to 512 bytes per row."
Resolution: Install the Z_AW_RFC_READ_TABLE function on the SAP application server.
For details on how to do this, refer to SAP Note 1752954.
Error: 2 please assist with data services Error.
Received this error when trying to run a batch job.
Work flow <Transaction_Data> is terminated due to an error <54003>.  I have checked the data
in Hana and it doesn’t seem to have any issues.

Solution: You cannot use timestamp columns in a SQL transform. To use these columns you have
to convert them to character format.
Do the conversion of these columns in the SQL transform using
TO_DATE(TO_CHAR(<SCHEMA>.<COLNAME>, 'YYYY-MON-DD HH24:MI:SS'),
'YYYY-MON-DD HH24:MI:SS')
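As a minimal sketch, assuming a source table ORDERS with a timestamp column CREATED_TS (both names hypothetical) and a database that supports TO_CHAR/TO_DATE with these format strings, the SQL transform text would wrap the column like this:

SELECT ORDER_ID,
  TO_DATE(TO_CHAR(ORDERS.CREATED_TS, 'YYYY-MON-DD HH24:MI:SS'),
          'YYYY-MON-DD HH24:MI:SS') AS CREATED_TS
FROM ORDERS

The TO_CHAR/TO_DATE round trip forces the timestamp through character format, which is what lets the SQL transform handle the column.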
Error: 3 Error 80101
I have a flow which uses the lookup_ext function, but when I run it I get the following error:
Job <Demo> is terminated due to error <80101>
<C:/BODS/C:/BODS/ID_Address.txt>. Please check its path and permissions.

Solution: Are you using a flat file as source or target? From the error, it looks like you have set
the root directory in the file format and also given the complete path for the file.
Check the file format definition, and also the values for root directory and file name in the data flow.
If you have specified a root directory, then you have to give just the file name in the file name option;
at run time DI will concatenate the root directory and file name to form the complete path of the file.
In this case C:/BODS is the root directory, and you should give only ID_Address.txt as the file name,
not C:/BODS/ID_Address.txt. The corrected settings would look like the sketch below.
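A sketch of the fixed file format values (names taken from the error above):

Root directory : C:/BODS
File name(s)   : ID_Address.txt

At run time the two are concatenated into C:/BODS/ID_Address.txt; putting the full path into the file name as well is what produced the doubled <C:/BODS/C:/BODS/ID_Address.txt> in the error.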
Error: 4 Error number <50664> in SAP BODS 4.2
Error message:

Just a bit of background: we are on DS 4.2 SP4 with DB2 10.5.5 for the DS repository and the
source/target database. We recently modified our ETL process, which was running sequentially,
and converted it into parallel extractions. We often encounter this error, but the affected job
and data flows are random.

Solution: There are multiple ways to have your job server configured, and one such way is
through a VM off of a mainframe. It MIGHT be possible to increase the memory allocations
without doing a physical hardware upgrade if your system is designed this way. If you have a
physical server to yourself, and you have already checked the Master Guide
(http://help.sap.com/businessobject/product_guides/sbods42/en/ds_is_42_master_en.pdf) for
references, then your server may simply have peaked in performance and require an actual
physical upgrade. There are a whole host of problems that could be going on.
One issue might be having enough CPU threads to process in parallel but not enough RAM to
perform each operation simultaneously. A quick fix may be to set Tools -> Options ->
Job Server -> Environment -> Maximum number of Engine Processes to a lower value. This will
reduce how many objects can run simultaneously.
Another option is to reduce what is being placed into memory in the first place. This can be
done by tuning the cache settings on the objects that run in parallel.
Both of these could potentially degrade performance, and they are not the only options. For instance,
Dirk Venken's optimization guide (Let the database do the hard work! Better
performance in SAP Data Services thanks to full SQL-Pushdown) is very good at explaining
some of the common ways to get pushdown to happen.
Some quick tips to keep in mind for this: make sure all source tables come from the same
database (you can't push an operation down to a database that does not have a table there to join
against); utilize pushdown_sql() in your WHERE clauses, writing database-level SQL rather
than BODS language code (see the sketch below); and try your hardest not to have more than
one target table in a single data flow. Use another data flow to copy the results to another location
if needed instead. And never use a source table as the target table in the same data flow
without a Data_Transfer (it seems weird, but I have seen very large performance improvements
from following this rule).
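A minimal sketch of the pushdown_sql() idea, assuming a DB2 datastore named DS_DB2 and a hypothetical SALES table; the filter then runs inside DB2 instead of the DS engine:

# In the WHERE tab of a Query transform:
pushdown_sql('DS_DB2', 'SALES.ORDER_DATE >= CURRENT DATE - 30 DAYS')

The second argument is passed to the database verbatim, so it must be valid SQL for that database (here, DB2 date arithmetic); Data Services does not parse or translate it.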
Error: 5 SAP BODS error 120302
Trying to test my BODS system by creating a simple job. Where just get data from sql server and
store that in a flat file. However, whenever running the getting error 120302. Log is
6/5/2013 12:06:28 PM 8716 1628 JOB      Reading job <cac4e0d9_70a5_4351_b9d7_9ce125f22e10> from the repository; Server version is <14.1.1.338>; Repository version is <14.1.1.0000>.
6/5/2013 12:06:28 PM 8716 1628 JOB      Current directory of job <cac4e0d9_70a5_4351_b9d7_9ce125f22e10> is <D:\Program Files (x86)\SAP BusinessObjects\Data Services\bin>.
6/5/2013 12:06:29 PM 8716 1628 JOB      Starting job on job server host <DLEXWBODS001>, port <3500>.
6/5/2013 12:06:30 PM 8716 1628 JOB      Job <New_Job> of runid <2013060512062987161628> is initiated by user <mdmproj>.
6/5/2013 12:06:30 PM 8716 1628 JOB      Processing job <New_Job>.
6/5/2013 12:06:32 PM 8716 1628 JOB      Optimizing job <New_Job>.
6/5/2013 12:06:32 PM 8716 1628 JOB      Job <New_Job> is started.
6/5/2013 12:06:32 PM 8716 1628 WORKFLOW Work flow <New_WorkFlow> is started.
6/5/2013 12:06:33 PM 6840 6652 DATAFLOW Process to execute data flow <New_DataFlow> is started.
6/5/2013 12:06:35 PM 6840 6652 DATAFLOW Data flow <New_DataFlow> is started.
6/5/2013 12:06:35 PM 6840 6652 DATAFLOW Cache statistics determined that data flow <New_DataFlow> uses 0 caches with a total size of 0 bytes, which is less than (or equal to) 3757047808 bytes available for caches in virtual memory. Data flow will use IN MEMORY cache type.
6/5/2013 12:06:35 PM 6840 6652 DATAFLOW Data flow <New_DataFlow> using IN MEMORY Cache.
6/5/2013 12:06:36 PM 6840 6652 DATAFLOW Data flow <New_DataFlow> is terminated due to error <120302>.
6/5/2013 12:06:36 PM 6840 6652 DATAFLOW Process to execute data flow <New_DataFlow> is completed.
6/5/2013 12:06:36 PM 8716 1628 WORKFLOW Work flow <New_WorkFlow> is terminated due to an error <120302>.
6/5/2013 12:06:36 PM 8716 1628 JOB      Job <New_Job> is terminated due to error <120302>.
While the error log says:
6/5/2013 12:06:36 PM 8716 1628 CON-120302 |Data flow New_DataFlow
6/5/2013 12:06:36 PM 8716 1628 CON-120302 ODBC call <SQLDriverConnect> for data source <msqlp01.pvi.com> failed: <[Microsoft][ODBC SQL Server Driver][SQL Server]Login failed for user 'NA\mdmproj'.>. Notify Customer Support.
Does that mean my username is not working? Since I logged in using that username, it should
not be the problem. I am also able to check the values of the table in the database, so I believe
this is not a database authentication problem.
And since I am storing the result as a flat file on my local machine, that should not be a problem
either.

Solution: BODS error 120302 is generally triggered because of improper credentials in your
datastore.
Go to the source SQL Server datastore and cross-check the credentials; make sure they are in the
correct format. Keep in mind that the connection is made by the job server, not by the Designer
session you logged in with, so credentials that work interactively can still fail at run time; see
the sketch below.
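As a hedged sketch, the SQL Server datastore fields to double-check would be something like this (the server name is taken from the error log; everything else is hypothetical):

Datastore type         : Microsoft SQL Server
Database server name   : msqlp01.pvi.com
Database name          : <your database>
Windows authentication : No            # if No, user name/password must be a valid SQL Server login
User name / Password   : <SQL Server login>

The failing login 'NA\mdmproj' in the log is a Windows domain account; if you want Windows authentication, the job server service must run under an account SQL Server accepts, otherwise supply a SQL Server login in the datastore.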
Error: 6 SAP BODS error capture
I am loading several million to billions of rows of records into a single table.  I wanted the load
errors to write to a database table together with all the values for the data in the source row.  The
correctly loaded rows can load without writing anything to a log.  How can this be achieved?  
Also need to write logic to skip a row if it already exists by checking the primary key.  

Solution: There is no option to write the error records directly into a database table.

The workaround is to use the 'Error handling' section on the permanent target table. It lets you
write the error records into a flat file, which you can then load into DB tables, for example with
settings like the sketch below.
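A sketch of the relevant target-table settings (the overflow file name is hypothetical):

Target table -> Options -> Error handling
  Use overflow file : Yes
  File name         : load_errors.txt
  File format       : Write data      # writes the source row values rather than the failed SQL

A second data flow can then read load_errors.txt as a flat-file source and insert the rows into a database error table.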
Error: 7 RFC error while connecting BW and BODS
I am a novice to BW-BODS integration activities and currently am in need of help from YOU
experts here in this forum.
In my current project, we have to integrate our BW system with BODS and I took the challenge
of doing that. 
Before going into the details of the error, I would give you some info about the existing
environment.
BODS – Version is 14.0.3.273(Also BO and BODS are installed in the same server)
BW – Version is 701 SP5
I have done the following steps till now:
a) Since we don’t have BW7.3, we created a new external source in BW.
We followed the steps: RSA1; Source systems; External system ; create ; Gave logical system
and source system names ; In the next RFC destination screen ; gave a program id(say
SAPBODS), Gateway host; IP address of the BODS server, Gateway service as sapgw00
b) In the BODS management console we gave the following credentials:
RFC program Id as SAPBODS (same name which was given in BW)
Username - the one which I give to log into SAP BW
PWD - the one which I give to log into SAP BW
SAP Application server name - the IP address which I got from the properties of BW server in
logon pad
Client number - the number which I got from the properties of BW server in logonpad
System number - the number which I got from the properties of BW server in logonpad
SAP Gateway Hostname - the same IP which I gave for the Sap application server name
SAP Gateway service name - sapgw00 (the same name which I have given in BW earlier)
Now when I ran the above created RFC service interface in BODS ; It’s showing as STARTED.
Now when I do the connection test of the RFC destination in SM59 (which I created initially in
step a), getting the connection error (as below).
PS: The BODS/BO server is running on Windows platform and the BW OS is HP-UX and
Oracle DB.
The basis team here is also looking into the issue. Parallely, I just want to ensure that I have not
missed out any basic steps in between.
I have been searching lot of forums including this to find a solution for this but unfortunately I
could not get a suitable solution. So thought of taking your help……Please let me know if
somebody can help me out with a suitable solution for my issue. Did I miss out any steps as am
afraid am new to this integration…. 

 
Solution: I finally found the solution for the error.
The simple summary: when you enter the details for creating an external source system in RSA1,
do not fill in the gateway host and gateway service unless you are 100% sure about them (in
simple words, keep them blank).
The system will then derive the default gateway values itself from the program ID, provided you
give the same program ID (case-sensitive) on both ends (BW and BODS); see the sketch below.
Once again, thanks for all your support. I would like to close this issue here.
In case someone faces any issue concerning this integration, please be kind enough to let us
know, because we did a lot of iterations/research before reaching this final solution and may be
in a position to help you out.
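A sketch of the matching pair of settings (the program ID SAPBODS is taken from the question; everything else depends on your landscape):

In BW (RSA1 external source / SM59 RFC destination):
  Program ID      : SAPBODS          # case-sensitive
  Gateway host    : <leave blank>
  Gateway service : <leave blank>

In the BODS Management Console (RFC server interface):
  RFC program ID  : SAPBODS          # must match BW exactly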
Error: 8While loading data to SAP HANA from ECC using BODS I am getting error.
I am very new to BODS and using it first time only. When I am loading data using BODS I am
getting following error:

I am aware that it is failing because of Chinese char. I tried to find out the issue  by myself and I
goggled it out but it didn't help.

Solution:

In the target table options you will find the temp table option. Set it to Yes and run the job.
Error: 9Error while loading data from ECC to HANA
Trying to load KNA1 table from ECC to SAP HANA using data services. So as process I created
the dataflow and query to load the data into my target system. Everything is fine till here. But the
moment I try loading the data into my target system in the job log it is showing an error. I am
using batch load. Please find the attachment for error.

Solution: This is related to data types in the HANA table. Are you using a template table? If so,
make sure you set the 'Use NVARCHAR for VARCHAR...' option on the target table Options tab,
as sketched below.
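A sketch of the setting on the HANA template table (the full option label, abbreviated in the answer above):

Target template table -> Options tab
  Use NVARCHAR for VARCHAR columns in supported databases : Yes

With this set, the generated HANA columns are created as NVARCHAR, which holds multi-byte data; the same setting is the usual fix for the Chinese-character failure in Error 8 above.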
Error: 10BODS JOB ERROR
 
Getting the error in the job during run time only with the dataflow where using Table comparison
and History, Map and Key generation transform.
Actually, source is oracle and target is teradata. The job had been working well before.
The actual problem is with dataflow above mentioned. Other jobs are running good with the
same database ( teradata).
The mentioned dataflow works fine if I change target is oracle database.
The below error message is what I am getting
|Data flow DF_CHECK_BOFC_REJECT_RECORDS_IN_FIM|Loader FIM_JOB_RUN_PACKAGE_STATUS_FIM_JOB_RUN_PACKAGE_STATUS
SQL submitted to ODBC data source resulted in error . The SQL submitted is .
|Data flow DF_CHECK_BOFC_REJECT_RECORDS_IN_FIM|Loader FIM_JOB_RUN_PACKAGE_STATUS_FIM_JOB_RUN_PACKAGE_STATUS
Execution of for target failed. Possible causes:
(1) Error in the SQL syntax;
(2) Database connection is broken;
(3) Database related errors such as transaction log is full, etc.;
(4) The user defined in the datastore has insufficient privileges to execute the SQL.
If the error is for a preload or postload operation, or if it is for a regular load operation and load
triggers are defined, check the SQL. Otherwise, for (3) and (4), contact your local DBA.

Solution: Try running your job directly from DS instead of from FIM.
Error: 11Data Services error while executing batch job to HANA
We have DS installed on Linux; DS designer is on local windows machine. We have installed
required HDBODBC driver on windows machine. Now when we try to create a simple flow
(source table and template table to HANA DB), we are receiving the following error
4/11/2014 2:52:28 AM 31250 2260449056 JOB    The initial environment locale <eng_us.utf-8> has been coerced to <Unicode (UTF-16)> ().
4/11/2014 2:52:28 AM 31250 2260449056 JOB    Reading job <6edace34_60cc_4ed5_9dd4_1d8cddf0c5fc> from the repository; Server version is <14.2.1.224>; Repository version is <14.2.1.0000>.
4/11/2014 2:52:28 AM 31250 2260449056 JOB    Current directory of job <6edace34_60cc_4ed5_9dd4_1d8cddf0c5fc> is </opt/sap/dataservices/bin>.
4/11/2014 2:52:29 AM 31250 2260449056 JOB    Job <DF_001> of runid <20140411025228312502260449056> is initiated by user <oraods>.
4/11/2014 2:52:29 AM 31250 2260449056 JOB    Processing job <DF_001>.
4/11/2014 2:52:29 AM 31250 2260449056 DB_LIB Load odbc driver library </usr/sap/hdbclient/libodbcHDB.so>.
4/11/2014 2:52:29 AM 31250 2260449056 DB_LIB Load odbc driver library </usr/sap/hdbclient/libodbcHDB.so>.
4/11/2014 2:52:29 AM 31250 2260449056 DB_LIB Load odbc driver library </usr/sap/hdbclient/libodbcHDB.so>.

Solution: The source and the target have to be configured on the server for each datastore in the
data flow. The HDBODBC driver on the Windows machine only serves the Designer; the job itself
executes on the Linux job server, so the HANA client and its ODBC driver must be configured there
as well, for example along the lines of the sketch below.
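A hedged sketch of a unixODBC data source entry on the Linux job server (the DSN and host:port are hypothetical; the driver path is taken from the trace log above):

[HANA_DSN]
Driver     = /usr/sap/hdbclient/libodbcHDB.so
ServerNode = hanahost:30015

The HANA datastore defined on the server would then reference this data source. The exact ODBC configuration steps differ per DS release, so verify them against the Administrator Guide for your version.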
Error: 12Date Convert Error in SAP BODS

My requirement, is to convert the incoming date (Format: MM/DD/YY) to YYYYMMDD.


Input date is of character type, for example 10/20/2012
We need to convert this to 20121020.
We tried using to_date (), but this gives the date as 2012.10.20 00:00:00.
Which is not acceptable?
Some of the options we tried.
To_char (to_date (tablename.fieldname, ‘MM/DD/YYYY’), ‘YYYYMMDD’))
To_date (to_char (tablename.fieldname, ‘MM/DD/YYYY’), ‘YYYYMMDD’))
To_date (tablename.fieldname, ‘YYYYMMDD’)
But If we provide as to_date (tablename.fieldname, ‘MM/DD/YYYY’), this gives the output as
YYYY.MM.DD HH:MM:SS. Which is not acceptable as we need the format as YYYYMMDD?
Attached is the screen shot for Job log

Solution: In a query, create a new output column of type varchar(8); the mapping will be something
like substr(source.column, 7, 4) || substr(source.column, 1, 2) || substr(source.column, 4, 2).
(Note that the day portion starts at position 4, after the second slash, not at position 3.)
Alternatively, to_char(<date>, 'YYYYMMDD') returns a varchar, so the output will look like
YYYYMMDD.
In either case, the column in the output schema must not be of date type: a YYYYMMDD string
does not fit a date-type column, so keep the output column varchar. See the sketch below.
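A minimal sketch of both mappings, assuming a varchar input column SRC.DOC_DATE holding values like '10/20/2012' (the column name is hypothetical):

# Option 1: pure string slicing, no date parsing
substr(SRC.DOC_DATE, 7, 4) || substr(SRC.DOC_DATE, 1, 2) || substr(SRC.DOC_DATE, 4, 2)

# Option 2: parse to a date, then format back to a string
to_char(to_date(SRC.DOC_DATE, 'MM/DD/YYYY'), 'YYYYMMDD')

Both return '20121020' as a varchar; map the result to a varchar(8) output column, not a date column.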
Error: 13Error in BODS installation with HANA
While installing BODS in windows i am facing issue like below,

I have done HANA client installation in windows and i have added environment  variable also
for HANA client.

Solution: The issue was cleared by selecting the JDBC driver path from the HANA client installation.
Error: 14SAP BODS excel file open error
Creating a very basic job which reads excel as an input and produces flat file as output.
The excel is on my local machine.
But when I execute the job, I get below error.
12448 1620 FIL-080101 15/12/2014 14:05:15 Cannot open file <//Client/C$/311649/work/MLE
- Demand Forecasting/sample data files/Product Service - ProductGroup.xls>. Check
The Excel file is on your local machine. DS runs on a server. If the server has no access to your
local drive, and it normally hasn't, you'll have to copy your file.

Assuming you run DS on Windows, you basically have 2 options:

1/. Copy the file to a disk on the DS server
2/. Move your file to a share

When running DS on Linux, you only have option 1.
You'll have to change the access path of the Excel file in the data flow(s) where it is used, so
that DS can find it in the new location; for example:
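A sketch of the path change, assuming the file is copied to a folder D:\BODS_data on the DS server (the server-side folder is hypothetical; the original path is taken from the error above):

Before : //Client/C$/311649/work/MLE - Demand Forecasting/sample data files/Product Service - ProductGroup.xls
After  : D:\BODS_data\Product Service - ProductGroup.xls

The //Client/C$/... form is a remote-session mapping of your local C: drive, which the job server cannot open.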
Error: 15BODS 4.2 cannot import the metadata table,
RFC_ABAP_INSTALL_AND_RUN syntax error
 
Hi all, we installed BODS 4.2 server to substitute a 4.1, but we are facing the error:
Error: Cannot import the metadata table <name=T001>
RFC CallReceive error <Function /BODS/RFC_ABAP_INSTALL_AND_RUN:
RFC_ABAP_RUNTIME_FAILURE -(Exception Key: Syntax error in program
/BODS/SAPLBODS....
We already tried the solution for when people get the error related to unicode.
Also, we are able to pull data via extractors, it only fails when loading Tables....

Solution: The problem was related to the SAP transports. We downgraded to DS 4.1, and the
readme.txt in the transports folder is a little different; for SAP NW 6.0 it reads:
"SAP NetWeaver 6.0 and later, SAP functions for Data Services are included in three SAP
components: PI_BASIS, SAP_APPL and SAP_BW. If your system does not contain these
functions or you plan to upgrade SAP systems, see SAP Note 1919255 for information about
Data Services transport functions."
SAP Note 1919255 sends you to SAP Note 1916294, which states that the DS functions (/BODS)
may have issues and that updated transport files need to be installed in the ECC as a short-term
solution.
