
Infosys

BW Data Archiving Process of DSO in Detail


March 2012

Infosys Limited
Bangalore
Declaration

We hereby declare that this document is based on our personal experiences. To the
best of our knowledge, this document does not contain any material that infringes the
copyrights of any other individual or organization including the customers of Infosys.

Authors:
Haritha Molaka, Lakshmi Muni Nallu

Date written (MM/DD/YY): 03/16/2012

Project Details:
- Project involved : PNABIHYP
- S/W Environment : SAP BI (7.0)
- Project Type : Development

Target Readers: All


Keywords: SAP BW Data Archiving

Summary

• This document describes BW data archiving and its methods. It shows the
implementation of the ADK method of archiving a DSO in detail, as well as the
archiving of a write-optimized DSO. It also explains various issues faced
while archiving the data and the relevant SAP Notes implemented.

• References:
http://help.sap.com
http://scn.sap.com/welcome

Table of contents

• BW Data Archiving and its need


• Benefits of Data Archiving
• Methods of Data Archiving
• PBS, a Third Party Solution Provider
• Archiving procedure for ADK method
• Archiving of Write optimized DSO
• Using Process chain to schedule Archiving
• Using administration view of the Infoprovider for scheduling archiving
• Deleting/Changing Archiving Object
• Issues encountered while archiving the data and the relevant SAP notes
implemented
• Observations while archiving the data

BW Data Archiving

• Data archiving is the process of removing data from the BI database to
avoid or slow down continuous database growth.

Need for data archiving:

• Below are the challenges that create the need for data archiving:

- Continuous growth of the SAP BW database
- Reduced retrieval performance
- Growing administration and maintenance costs

• The solution is to remove data from the BW database.

Benefits of Data Archiving

• Reduction in main memory and CPU consumption, which saves administration
costs.

• As the data is transferred to the archive storage at regular intervals,
less effort is required for backup and disaster recovery.

• Improved performance of the loading process in BI and optimized query
response times.

• Reduction in the consumption of hardware resources.

Methods of Data archiving

Three methods are available for removing data from the SAP BI database.

• Traditional ADK (Archive Development Kit) Method

• NLS (Near Line Storage) Method

• NLS with ADK

Methods of Data archiving

ADK method :

• BW data from DSOs and InfoCubes can be transferred directly to compressed
SAP standard ADK archive files.

• Using the ArchiveLink interface, these ADK files can be passed to any
storage medium or content repository.

• ADK files are not readable by BW queries. For query access, they have to
be reloaded.

Methods of Data archiving
NLS Method:

• BW data from DSOs and InfoCubes can be transferred to an external nearline
system via the SAP nearline storage interface.

• Queries can access the archived data directly, without reloading it.

NLS with ADK Method:

• This method is a combination of classical file-based data archiving and
archiving in a nearline system.

• It is usually used when migrating from the ADK method to the NLS method.

• First, the ADK archive is created during the write phase of the data
archiving process. The archive data is then transferred to the external
nearline system in the subsequent verification phase.
Methods of Data archiving

• Archiving methods in SAP BI 7.0 (in SAP BW 3.x, only ADK-based archiving is possible):

[Figure: the BI DAP controls three archiving paths out of the SAP BI DB:
(1) ADK into an SAP ADK archive; (2) NLS via the NLS interface into a
third-party nearline system; (3) ADK + NLS combining both.]

Methods of Data archiving

ADK Method:
- Transfers data to SAP standard archive files
- BW queries cannot access archived data directly
- Reloads from ADK and updates from ADK are possible
- Data access is sequential and slow

Nearline Method:
- Transfers data to an external nearline system without any SAP archive files
- BW queries can access archived data directly
- Reloads through NLS and updates from NLS are possible

NLS + ADK Method:
- Transfers data to the external nearline system and into SAP ADK files
- Reloads from ADK and updates through NLS

PBS, a Third Party Solution Provider

• PBS is a certified SAP partner providing data archiving solutions.

• PBS provides an add-on named CBW for the data archiving solution.

• CBW is a supplementary software solution which provides integrated data
access with BW queries to archived and non-archived data, without restoring
the archive files into the BW database. The only prerequisite for CBW is a
successful SAP BW data archiving process in place.

• PBS includes two nearline storage interface archiving solutions for SAP
NetWeaver Business Warehouse 7.x:

- PBS CBW NLS
- PBS CBW NLS IQ

PBS, a Third Party Solution Provider

PBS CBW NLS:

• This solution is based on SAP’s Archive Development Kit (ADK) technology
and is completely file based.

• PBS CBW NLS is a software solution which requires no additional hardware
and is easy to implement.

• To provide access to the archived data, the solution operates with special
archive indexes and archive aggregates that are stored in the archive system
along with the archived data.

• Queries can access the archived data through a virtual provider and a
MultiProvider generated on top of it, known as nearline providers.

PBS, a Third Party Solution Provider

PBS CBW NLS IQ:

• It implements column-based database technology from Sybase, called Sybase
IQ Analytics Server, and it needs an additional hardware server to run.

• In contrast to the file-based approach, this column-based technology does
not require archive files, archive indexes, or archive aggregates, and it
enables very quick query response times.

• If the “Read Near-Line Storage As Well“ option is selected in the query
attributes, queries can access both BI database data and archive data from
the nearline system.

• The only prerequisite for using PBS CBW NLS IQ is a successful, purely
nearline-based SAP data archiving process in place. The archive data is then
generated directly in Sybase IQ.

PBS, a Third Party Solution Provider

• Query access to archived data using PBS nearline services:

[Figure: a BW query can read the DB, the archive, or both. The
InfoCube/DataStore object reads online data from the SAP BW database; a
virtual provider reads archived data through the PBS NLS interface (PBS
aggregates and indexes in the SAP archive, or Sybase IQ); a MultiProvider on
top combines DB and archive data.]

Archiving procedure for ADK method

The archiving procedure is divided into three main steps:

• Creation of archive files: in the write phase, the data to be archived is
written sequentially into newly created archive files.

• Storage of archive files: the newly created archive files can then be
moved to a storage system.

• Deletion from the database: in the delete phase, the delete program reads
the data from the archive files and then deletes it from the database.

• The transfer to an external storage system can be triggered manually or
automatically. It is also possible to delete the data before the store phase.

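• For background: the write phase of any ADK-based archiving runs through the generic ADK API. The following ABAP sketch is schematic only; the actual write program for a DAP is generated by BW, and the archiving object name ZDAP_DSO and active-table structure /BIC/AZDSO00 are hypothetical:

DATA: lv_handle TYPE sy-tabix,
      ls_data   TYPE /bic/azdso00.   " hypothetical DSO active-table row

* Open a new archiving session for the archiving object
CALL FUNCTION 'ARCHIVE_OPEN_FOR_WRITE'
  EXPORTING
    object         = 'ZDAP_DSO'
  IMPORTING
    archive_handle = lv_handle.

* One data object per record (or per semantic group of records)
CALL FUNCTION 'ARCHIVE_NEW_OBJECT'
  EXPORTING
    archive_handle = lv_handle.

CALL FUNCTION 'ARCHIVE_PUT_RECORD'
  EXPORTING
    archive_handle   = lv_handle
    record           = ls_data
    record_structure = '/BIC/AZDSO00'.

CALL FUNCTION 'ARCHIVE_SAVE_OBJECT'
  EXPORTING
    archive_handle = lv_handle.

* Close the session; ADK compresses and writes the archive file(s)
CALL FUNCTION 'ARCHIVE_CLOSE_FILE'
  EXPORTING
    archive_handle = lv_handle.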
Archiving procedure for ADK method

Here are the initial steps to be performed before creating a data archiving process:

• A few initial settings are to be maintained in T code SARA.

• A logical path and a logical file are to be created to store the generated
ADK files in the file system.

• A content repository is to be created using T code OAC0.

Initial settings to be maintained in T code SARA

• Click Customizing

Initial settings to be maintained in T code SARA
• Technical Settings under “Cross Archiving Object Customizing”


Initial settings to be maintained in T code SARA

• Check & Delete Settings


Initial settings to be maintained in T code SARA

• ‘Cross-Client File Names/Paths’ under Basis Customizing settings takes you
to T code ‘FILE’.

Logical Path and Logical file creation in T code ‘FILE’

• Logical Path and Logical File have to be created here.

Logical Path and Logical file creation in T code ‘FILE’

• The logical path created is shown below.
• The archive directory is used for storing the archive files.
• The physical path is /archive/<SYSID>/pbsarchive/<FILENAME>

Logical Path and Logical file creation in T code ‘FILE’
• Physical file format is
<PARAM_1>_<PARAM_3>_<DATE>_<TIME>_<PARAM_2>.ARCHIVE
where:
• PARAM_1 Two-digit application abbreviation for classifying archive data in the system.
This value is taken from the definition of the associated archiving object.
• PARAM_2 Single digit alphanumeric numbering (0-9, A-Z). The value is assigned by the
ADK when creating a new archive file, and numbers are assigned consecutively in
ascending order.
• PARAM_3 Multiple digit alphanumeric character string. The name of the archiving
object is entered as a value at runtime. In archive management, this indicates the
nature of the data content, and also enables storage of archive files according to
archiving object.

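• To verify how the logical file name resolves into a physical path at runtime, the standard function module FILE_GET_NAME can be used. A minimal sketch, assuming the logical file Z_PBS_BW_ARCHIVE_PATH_CBW from this document and illustrative parameter values (ZDAP_DSO is a hypothetical archiving object name):

DATA lv_physical TYPE filename-fileextern.

CALL FUNCTION 'FILE_GET_NAME'
  EXPORTING
    logical_filename = 'Z_PBS_BW_ARCHIVE_PATH_CBW'
    parameter_1      = 'BW'         " application abbreviation
    parameter_2      = '0'          " file counter assigned by ADK
    parameter_3      = 'ZDAP_DSO'   " hypothetical archiving object name
  IMPORTING
    file_name        = lv_physical
  EXCEPTIONS
    file_not_found   = 1
    OTHERS           = 2.

IF sy-subrc = 0.
  WRITE / lv_physical.
  " e.g. /archive/<SYSID>/pbsarchive/BW_ZDAP_DSO_20120316_101530_0.ARCHIVE
ENDIF.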
Content Repository creation in T code OAC0

• Content Repository has to be created using T code OAC0

Content Repository creation in T code OAC0

• B2 is the content repository created, and it is essential to specify the
same while creating DAPs.

• Test the connection by clicking on the “Test Connection” button.

• If you do not get the success message, it indicates that the connection
between SAP and Documentum is not working, and the settings need to be
checked again on both the SAP and Documentum sides as per the installation
manuals.

Content Repository creation in T code OAC0

• Click on “Send Certificate” to send the required certificate to the
Documentum server.

• The success message indicates that the connection settings between SAP and
Documentum are good and that data can now be transferred between the two
systems through ArchiveLink.

• All the above steps are one-time settings which hold good for all BW
objects.

Archiving procedure for ADK method

Following are the activities to be performed for data archiving:

– Creating an archiving object
– Scheduling the archiving session
– Monitoring and statistics of the archiving session
– Reloading archived data (optional)
– Reporting archived data (optional)

Creating an archiving object

Creating a Data Archiving Process (DAP) for a DSO:

• Go to transaction RSA1.
• Right-click the DSO and select “Create Data Archiving Process”.

Creating an archiving object
• The ‘ADK-Based Archiving’ option is checked by default. The archiving
object name is generated by the system for the DSO.
• For a standard DSO, ‘Time Slice Archiving’ is selected by default. For a
write-optimized DSO, ‘Request-Based Archiving’ is selected by default.

Creating an archiving object
• In the Selection Profile tab, the primary partitioning characteristic and
additional partitioning characteristics have to be selected.
• Here, ‘Characteristic for Time Slices’ offers in the drop-down the list of
all date fields present in the DSO. These date fields may be key or non-key
fields of the DSO.
• If the selected time characteristic is not among the key fields of the
DSO, there are two options: either it is still used as the primary
partitioning characteristic, or a time-correlated key characteristic is used
for partitioning instead.
• Under ‘Additional Partitioning Characteristics’, only the key fields
present in the DSO are listed on the right side. You can drag them to the
left.

Creating an archiving object
• In the ‘Semantic Group’ tab, you can find all the fields of the DSO on the
right side. You can drag the required fields to the left side.
• You can form semantic groups to archive the data in a sorted way.
• The system reads from the database after sorting according to the grouping
characteristics (the order is important). Records with the same values in
the grouping characteristics are written to the archive as one data object.
• If no characteristic is selected, the storage in the archive is not
sorted, and technical criteria (a fixed size) are used to split the data
into data objects.

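• As an illustration (hypothetical fields): if a document number and a fiscal period are chosen as grouping characteristics, the data is read sorted by those fields, and all records of one document in one period are written to the archive as a single data object, which a later read access can then retrieve in one step.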
Creating an archiving object

• In the ADK tab, specify the logical file name that was created earlier,
i.e., Z_PBS_BW_ARCHIVE_PATH_CBW.
• Archive File Size determines the size of each archive file created.
• Either the maximum file size in MB or the maximum number of data objects
can be specified.
• Once the specified maximum size is reached for a particular archive file,
a new file is created.
• The Delete Jobs settings determine how the delete job runs.
• The Storage System settings determine where to store the archive files and
the sequence of storing and deleting.
• Specify the content repository that was created earlier, i.e., B2.

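• For example (illustrative figures): with ‘Maximum Size in MB’ set to 100, a write job that produces roughly 450 MB of archive data creates five files, four of about 100 MB and a last one of about 50 MB; PARAM_2 in the file name numbers them 0 through 4.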
Creating an archiving object

• If the ‘Start Automatically’ option is checked, the store job starts
automatically soon after the write job and stores the files in the content
repository.
• Otherwise, the store job has to be triggered manually.
• The ‘Delete Before Storing’ option deletes the data from the DSO once the
archive files are generated, and then stores the files in the storage system.
• The ‘Store Before Deleting’ option stores the files in the storage system
first, and then deletes the data from the DSO.
• The ‘Delete Program Reads from Storage System’ option has to be checked so
that the delete job first verifies the stored files in the storage system
and only then deletes the data from the DSO.

Creating an archiving object

• Now activate the DAP and collect it in a transport request (TR).

• Now right-click on the DSO and select “Manage ADK Archive”.

• This takes you to T code SARA.

• Click Customizing.

• Under ‘Archiving Object-Specific Customizing’, double-click ‘Technical
Settings’.

Creating an archiving object

• These are the technical settings specific to the archiving object.

• Collect them in another TR.

• Objects are to be collected in the below order while creating a DAP:

1. Logical path/file from ‘FILE’
2. Archiving object (DAP) from RSA1
3. Archiving object settings from SARA

Scheduling the archiving session

• After creating and maintaining an archiving object for a DSO, the next job
is to archive the data.
• Archive Administration (SARA) is the central starting point for most user
activities in data archiving, such as the planning of write and delete jobs
and the storing and retrieving of archive files.

Scheduling the archiving session

• Go to SARA.

• Enter the archiving object name.

• Click Write.

• Provide the start date and spool parameters to start the write job.

• Click Maintain to create the variant.

Scheduling the archiving session

• In the first tab, you can see the primary time characteristic that was
selected when the DAP was created.

• You can use either relative or absolute restrictions. If both are given,
the intersection of the two is used.

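• For example (illustrative dates): a relative restriction of ‘older than 12 months’ for a write job run on 03/16/2012 makes everything up to 03/16/2011 archivable; if an absolute restriction of 01/01/2010 to 12/31/2011 is entered as well, the intersection 01/01/2010 to 03/16/2011 is archived.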
Scheduling the archiving session

• In the ‘Further Restrictions’ tab, you can see the additional
characteristics that were selected while creating the DAP.

• If needed, you can use them along with the primary time characteristic for
restrictions.

• It is recommended to check the option ‘Autom. Request Invalidation’. If
the write job fails, this unlocks the locked data area so that you can
archive the same data again.

• Actual archiving of the data happens only when the write job runs in
production mode.

Scheduling the archiving session
• Save the variant and execute the write job. Click on the job overview.
• The write job archives the data from the active table (for a DSO) or from
the fact table (for InfoCubes) based on the given restrictions and generates
the archive files.
• Once the write job is done, you can see in AL11 the archive files
generated in the archive directory, i.e., /archive/<SYSID>/pbsarchive

Monitoring and statistics of the archiving session

• In the job log, you can see the restrictions and the number of records
archived.

• The records in the BW database are transferred as data objects to the
archive system.

Monitoring and statistics of the archiving session

• In the spool information (click on ‘Display Spool List’), you can see the
number of written data objects and the size (in MB) of each archive file
generated.

• Soon after the write job is completed, the store job starts automatically,
as we checked the option ‘Start Automatically’ for storage in the DAP
settings. Otherwise, the store job has to be triggered manually from SARA.

Monitoring and statistics of the archiving session

• The store job stores the archive files generated in AL11 to the archive
system (Documentum).
• Once the store job is done, there are no longer any files in AL11.
• To check whether the archive file was stored successfully in Documentum:
- Go back to transaction SARA
- Click Management
- Double-click on the archive file and check for the green indicator for
the “Storage System”
- The green indicator implies that the archive file is stored successfully
in the storage system.

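• The archiving sessions and files can also be checked programmatically: the ADK administration tables ADMI_RUNS (one row per archiving session) and ADMI_FILES (one row per archive file) record the progress. A minimal sketch, assuming a hypothetical archiving object name ZDAP_DSO and that the listed field names match your release:

DATA: lt_runs TYPE STANDARD TABLE OF admi_runs,
      ls_run  TYPE admi_runs.

* List all archiving sessions of the archiving object
SELECT * FROM admi_runs
  INTO TABLE lt_runs
  WHERE object = 'ZDAP_DSO'.

LOOP AT lt_runs INTO ls_run.
  WRITE: / ls_run-document,      " archiving session number
           ls_run-creat_date,    " creation date of the session
           ls_run-status.        " session status
ENDLOOP.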
Monitoring and statistics of the archiving session

• Once the data is archived and stored in the archive system, the data has
to be deleted from the BW system using the delete job.

• Go to SARA.

• Click Delete.

• Select the archive files generated, the start date and the spool
parameters.

• Execute the delete job.

Monitoring and statistics of the archiving session
• The delete job first verifies the stored files in the storage system and
then deletes the data from the DSO.

• If ‘n’ archive files are generated in the write job, ‘n’ parallel delete
processes are triggered.

• For n parallel delete processes in the background, n-1 delete processes
complete the delete phase with 0 deleted data records. The process that
verifies its data package last starts the collective delete phase for all
data records, in test mode or in production mode.

Monitoring and statistics of the archiving session
• The job log and spool information for the delete job are shown here.

• In the job log, you can see the selective deletion conditions and the
number of records deleted from the active table of the DSO.

• A secondary index can be created if a non-key field from the DSO is used
as the primary selection for archiving. This improves the performance of the
write job and the delete job.

Monitoring and statistics of the archiving session

• From RSA1, in the Manage screen of the DSO, you can see that an Archiving
tab has been added.

• In the Archiving tab of the InfoProvider management area, the archiving
request shows different statuses.

Monitoring and statistics of the archiving session

• Here are the different statuses of an archiving session and their
implications:

- Lock Status: before data archiving begins, the selected data area of the
archiving request is locked to prevent any changes.
- Copy Status: the archived data is copied to the near-line storage or the
archive.
- Verification Status: in the verification phase, the system checks that the
write phase was successful and that the data can be deleted from the
InfoProvider.
- Deletion Status: when the archived data is deleted from the InfoProvider,
the archiving process is complete.
- Overall Status: when all the steps of the archiving request are completed,
the status of the request can no longer be changed.

Reloading archived data

• You have the option of reloading the archived data into BW only in
exceptional cases.

• If you have archived the wrong data and realized it immediately, you can
use this reload option so that the data is loaded back into the DSO.

• Go to SARA.

• From the main menu, choose ‘Goto’ -> Reload.

Reloading archived data

• Create a variant, select the archive files to be reloaded, and provide the
start date and spool parameters.

• Execute the reload job.

Reloading archived data

• Once the reload job is done, the data is reloaded into the DSO. You can
see another, unlocked request added in the Archiving tab in the Manage
screen of the DSO.

• Here, the reloaded request ‘3,184,122’ corresponds to the archiving
request ‘3,184,010’ for the selection condition FISCVARNT = 'Z1' AND
FISCPER = '2006001'.

Reporting Archived Data

• CBW Administration Cockpit


• Go to transaction /PBS/CBW or program /PBS/CBW_INIT

Reporting archived data

• PBS provides a solution, PBS CBW NLS, which operates with special archive
indexes and archive aggregates and enables queries to access the archived
data along with online data.

• Using the CBW Administration Cockpit, you can maintain and generate the
indexes and aggregates on the archived data.

• Indexes and aggregates are generated by CBW in compressed files outside of
the BW database, i.e., in the archive system along with the archived data.

• You can generate the virtual InfoProvider (VXXXX) which contains the
archived data.

• You can also generate a MultiProvider (MXXXX) which consists of both the
virtual InfoProvider and the actual InfoProvider. So the MultiProvider
contains both archived data and online data.

Reporting archived data

• You can also copy BW queries onto the generated InfoProviders. If queries
exist on the actual InfoProvider, they are copied onto the virtual provider
and the MultiProvider.

• In this way, queries get access to both archived data and online data.

• PBS also provides an SAP archive file browser through which you can browse
the archived data.

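• For completeness: outside of the PBS tooling, the content of ADK files can also be read programmatically with the generic ADK read API. A minimal sketch, assuming a hypothetical archiving object ZDAP_DSO and record structure /BIC/AZDSO00:

DATA: lv_handle TYPE sy-tabix,
      lt_data   TYPE STANDARD TABLE OF /bic/azdso00.

CALL FUNCTION 'ARCHIVE_OPEN_FOR_READ'
  EXPORTING
    object         = 'ZDAP_DSO'
  IMPORTING
    archive_handle = lv_handle.

DO.
  " Position on the next data object; leave the loop at end of file
  CALL FUNCTION 'ARCHIVE_GET_NEXT_OBJECT'
    EXPORTING
      archive_handle = lv_handle
    EXCEPTIONS
      end_of_file    = 1
      OTHERS         = 2.
  IF sy-subrc <> 0.
    EXIT.
  ENDIF.

  " Fetch all records of the current data object
  CALL FUNCTION 'ARCHIVE_GET_TABLE'
    EXPORTING
      archive_handle        = lv_handle
      record_structure      = '/BIC/AZDSO00'
      all_records_of_object = 'X'
    TABLES
      table                 = lt_data.
ENDDO.

CALL FUNCTION 'ARCHIVE_CLOSE_FILE'
  EXPORTING
    archive_handle = lv_handle.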
Archiving of Write optimized DSO

• Request-based archiving.

• A relative time selection is defined in order to derive a "less than"
selection for the request ID, based on either:
- the request creation date/request loaded date, or
- an absolute time characteristic from the DSO.

• Archiving takes place strictly in increasing order of request ID.
[Figure: the requests currently being archived/restored sit directly above
the already archived requests.]

• Consequently, a restore must take place strictly in decreasing order of
request ID.

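• For example (illustrative request IDs): if requests 101, 102 and 103 fall into the selected data area, they are archived in the order 101, 102, 103; a later restore must start with 103 and work backwards to 101.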
Archiving of Write optimized DSO

• The selection profile depends on the DSO property "Do Not Check Uniqueness
of Data".

If the property is set (i.e., uniqueness need not be checked):

• The request loaded date/request created date can be used as the time slice
characteristic (faster and preferred).
• All the requests that were loaded on or before the given request loaded
date/request created date are archived.

If the property is not set (the default):

• An absolute time characteristic or a time-correlated partitioning
characteristic from the semantic key must be chosen in order to support the
check of uniqueness by range locks.

Archiving of Write optimized DSO

• When the property is set (i.e., uniqueness need not be checked) for the
DSO and the request loaded date is used in the DAP, the selection profile of
the DAP looks as shown below.

Archiving of Write optimized DSO

• The restrictions in the variant page look as shown.

• You can see only one tab, ‘Primary Time Restriction’; there is no
additional restrictions tab.

• Also, you can have only relative restrictions as shown, not absolute ones.

• Based on the relative selections given, request-based archiving happens.

Archiving of Write optimized DSO

• When the property is not set (i.e., uniqueness needs to be checked) for
the DSO, the selection profile of the DAP looks as shown below.
• An absolute time characteristic or a time-correlated partitioning
characteristic from the semantic key must be chosen.

Archiving of Write optimized DSO

• The restrictions in the variant page look as shown.
• You can see only one tab, ‘Primary Time Restriction’; there is no
additional restrictions tab.
• Along with the request loaded date, you can see the primary
characteristic, i.e., requisition number.
• Based on the restrictions given, selection conditions are created on the
primary characteristic, i.e., requisition number, as shown below.

Archiving of Write optimized DSO

• In the ‘Semantic Group’ tab, 0REQUEST and 0DATAPAKID are added by default.
You cannot add other fields.

Using process chain to schedule archiving
• Below is the process chain created to schedule the archiving and delete
processes using the process type ‘Archive Data from an InfoProvider’.

Using process chain to schedule archiving

• Below is the archiving process variant maintained using the process type
‘Archive Data from an InfoProvider’.

Using process chain to schedule archiving

• Below is the delete process variant maintained using the same process type
‘Archive Data from an InfoProvider’.

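• If the chain is to be triggered from another program rather than by its own schedule, the standard process chain API can be used. A minimal sketch, assuming a hypothetical chain name ZPC_DSO_ARCHIVING:

DATA lv_logid TYPE rspc_logid.

* Start the process chain containing the archiving and delete steps
CALL FUNCTION 'RSPC_API_CHAIN_START'
  EXPORTING
    i_chain = 'ZPC_DSO_ARCHIVING'
  IMPORTING
    e_logid = lv_logid.

WRITE: / 'Chain started, log ID:', lv_logid.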
Using administration view of the Infoprovider for scheduling
archiving

• You can also start the archiving session from the Archiving tab of the
Manage area of the InfoProvider.

• Click on ‘Create Archiving Request’.

• The variant page pops up, and you can run the job in either background or
dialog mode.

• Once the write job is done, you can see the incomplete request added, with
the copy status green and the overall status yellow.

Using administration view of the Infoprovider for scheduling
archiving
• To start the delete job, double-click on the request.

• A pop-up appears where you can start the delete job either in background
or dialog mode.

• Once the delete job is done, the overall status turns green.

Deleting/Changing Archiving Object

• When changing/deleting an archiving object for a DSO, the following
restrictions exist:

• If data has already been archived, the primary and additional partitioning
characteristics selected in the DAP cannot be changed.

• If data has already been successfully archived, it is always recommended
to delete an archiving object only after reloading the archived data into
the InfoProvider.

Deleting/Changing Archiving Object

• So if you want to change the primary/additional partitioning
characteristic in the DAP for a DSO which has already been transported and
archived in the target system, the following has to be done to migrate the
new changes through transports:

• Make sure all the archiving sessions for the DSO are cleaned up in both
the source and the target systems.
- If there are completed archiving sessions (green requests), reload the
archived data into the DSO.
- If there are incomplete archiving sessions (red/yellow requests),
invalidate the requests so that there are no locked data areas.

• Delete the old DAP in the source system, collect it in a transport
request, and migrate it to the target system so that the old DAP is deleted
in the target system as well.

• Create a new DAP using the new primary/additional partitioning
characteristic, collect it in a transport request, and migrate it to the
target system.

SAP Notes Implemented

Below are the SAP Notes that we implemented as and when we faced issues
while archiving the data:

• 0001484606 P25: DSO: Archiving WO DSO and data mart update
• 0001531529 Parallel delete reports terminate in test/production mode
• 0001559745 Termination occurs in verification phase with error RSDA 001
• 0001628437 P28:WO-DSO:archiving: Date selection: Without last day
• 0001636570 Delete job with transaction SARA terminates with RSDA 008

Issues encountered while archiving the data

Issue:
• If ‘n’ archive files are generated in the write job, ‘n’ parallel delete
processes are triggered.
• In production mode, n-1 parallel delete processes terminate with message
RSDA 213, and only the process that successfully verifies its data package
last starts the delete phase.

Resolution:
• The below SAP Note was applied to resolve this issue.

• SAP Note 1531529 - Parallel delete reports terminate in test/production mode

• After the note: for n parallel delete processes in the background, n-1
delete processes complete the delete phase with 0 deleted data records. The
process that verifies its data package last starts the collective delete
phase for all data records, in test mode or in production mode.

Issues encountered while archiving the data

• For example, for the 5 archive files generated, you can see 5 delete jobs
started.

• 4 delete jobs completed with 0 deleted data records.

• The last delete job starts the collective deletion of all data records.

Issues encountered while archiving the data
Issue:

• While running the process chain for archiving, the archiving process runs
fine, but the delete process fails with the below error message. When the
failed process is repeated, it runs fine.

• “Exception condition in row 204 of include
CL_RSDA_ARCHIVE_READER_ADK====CM001 (program
CL_RSDA_ARCHIVE_READER_ADK====CP)
General error in BI archiving; see long text
Exception condition in row 170 of include
CL_RSDA_ARCHIVING_REQUEST=====CM00X (program
CL_RSDA_ARCHIVING_REQUEST=====CP)”

Resolution: the issue is resolved when the below SAP Notes are applied.

• SAP Note 1559745 - Termination occurs in verification phase with error RSDA 001
• SAP Note 1636570 - Delete job with transaction SARA terminates with RSDA 008

Issues encountered while archiving the data

Issue:

• If requests of a write-optimized DataStore object (DSO) are archived, you
can enter a date as a selection. All requests with a loading date earlier
than or the same as this selection should be archived. However, only
requests with an earlier loading date are archived; the requests that were
loaded on the selection day are not archived.

Resolution:

The issue is resolved when the below SAP Notes are applied:

• SAP Note 1484606 - P25: DSO: Archiving WO DSO and datamart update
• SAP Note 1628437 - P28:WO-DSO:archiving: Date selection: Without last day

Issues encountered while archiving the data

• For example, here is the relative selection given in the variant page:
loading date <= 7/15/2009

• Even then, only the requests with a loading date < 7/15/2009 are archived;
the requests with a loading date = 7/15/2009 are not archived.

• Once the SAP Note is applied, all the requests with a loading date <=
7/15/2009 are archived.

Observations while archiving the data
• It is not recommended to start a new archiving session when there is an
existing incomplete archiving session.
• As long as there are incomplete archiving sessions, there is the risk that
data is archived more than once, as not all archived data was processed by
the relevant delete program.

• Here, request 2,990,501 is the incomplete archiving session for the
selection condition FISCVARNT = 'Z1' AND FISCPER = '2011012'.
• The data area is locked once the write job is completed.

Observations while archiving the data

• Once the data is archived for a particular selection, the data area is
locked against any further changes. You cannot archive the same data area
again.
• Here, the write job log shows ‘selected area already completely archived’
when the same data is archived again, i.e., FISCPER = '2011012' in this case.

Observations while archiving the data

• You also have the option of request invalidation for an incomplete
archiving session, if you have archived the wrong data or if you want to
archive the same data area again.
• Once the request is invalidated, the request turns red and the particular
data area is unlocked so that the same data area can be archived again.
• Select the request and double-click on it. Select the ‘Set Invalid’ option
and execute the job in dialog or background mode.

Observations while archiving the data
• Once the job is done, the request 2,990,501 has turned red and the lock is
removed for the selection condition FISCVARNT = 'Z1' AND FISCPER = '2011012',
so that this particular data area can be archived again.

• The request invalidation option is available only for incomplete archiving
sessions (yellow requests), not for completed sessions (green requests).

• A data load to the InfoProvider fails because of an incomplete or
unsuccessful archiving session: data cannot be loaded into the InfoProvider
when an archiving session is cancelled or is in ‘Incomplete’ status.

• To resolve this issue, the archiving request has to be invalidated so that
data can be loaded into the InfoProvider.

• So it is always recommended to enable the option ‘Autom. Request
Invalidation After Error’ while running the write job, so that if the
archiving session is cancelled, the request is invalidated automatically.

Observations while archiving the data

• If the “Start Automatically” option is checked for delete jobs on the ADK
tab, the delete job runs in test mode for all three of the below scheduling
methods:
- a process chain
- the administration view of the InfoProvider in RSA1
- ADK archive administration (T code SARA)

• When the write job is run using SARA, an option “Delete with Test Variant”
appears in the variant page of the write program. Ideally, based on whether
this option is checked or unchecked, the delete job that is scheduled after
the write phase should run in test mode or production mode. In fact, it
always runs in test mode only.

Observations while archiving the data

• Based on the size of the InfoProvider, the selection conditions, the
number of records, and the memory allocated to the user ID with which
archiving is done, the jobs may get cancelled with the memory dump
EXPORT_TOO_MUCH_DATA after running for a long time.
• In such a case, you may have to reduce the selection criteria/number of
records for archiving and run it in chunks.
• For example, for a huge DSO (containing huge data volumes in each period),
if 0FISCPER is used as the primary characteristic for archiving, the write
job for even one period may get cancelled with a memory dump.
• In such a scenario, we may have to change the primary partitioning
characteristic from 0FISCPER to some other time characteristic (e.g.,
posting date), as the system is not able to accommodate archiving of one
fiscal period at a time.
• Alternatively, a secondary index can be created on the DSO on the primary
partitioning characteristic so that the performance of the write and delete
jobs improves (see the illustrative sketch below).
• Using the other time characteristic (e.g., posting date), the data can
then be archived in chunks (a smaller number of records) by reducing the
selection criteria (e.g., 7 to 15 days).

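• The secondary index itself would normally be defined in SE11 on the active table of the DSO and transported. Purely as an illustration (hypothetical table and column names; the proper route remains SE11), the equivalent database index could be created via ADBC as follows:

DATA: lo_sql TYPE REF TO cl_sql_statement,
      lx_sql TYPE REF TO cx_sql_exception.

CREATE OBJECT lo_sql.
TRY.
    " /BIC/AZDSO00: hypothetical active table of the DSO
    " PSTNG_DATE:   hypothetical column of the partitioning characteristic
    lo_sql->execute_ddl(
      'CREATE INDEX "/BIC/AZDSO00~Z01" ON "/BIC/AZDSO00" ("PSTNG_DATE")' ).
  CATCH cx_sql_exception INTO lx_sql.
    " e.g. index already exists or DDL is not permitted for this user
ENDTRY.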
Observations while archiving the data

Write-Optimized DSO:

• As mentioned earlier, archiving for write-optimized DSOs follows
request-based archiving, as opposed to the time slice archiving of a
standard DSO. This means that partial request archiving is not possible;
only complete requests can be archived.

• Based on the selection conditions, only the complete requests that fall
within the selected data area are archived. If a request falls only
partially within the selected data area, it is not archived.

• You can archive only the requests that have been retrieved by all data
targets (via data mart). If a request has no data mart status, you cannot
archive it.

Thank You
