
R12 Applications File System

Hi,
A few days back I posted about the 3 tiers: 1)Desktop 2)Application 3)Database.
I'm continuing with the R12 application file system in this post.
II)Applications File System Overview
The top-level R12 Applications directory structure is divided into 3 parts:
1)Database server files (db)
2)Instance specific files (inst)
3)Application tier server files (apps)

1)Database server files (db):
The db/apps_st/data directory is located on the database node machine, and contains the system
tablespaces, redo log files, data tablespaces, index tablespaces, and database files.
The db/tech_st/10.2.0 directory is located on the database node machine, and contains the
ORACLE_HOME for the Oracle 10g database.
2)Instance specific files (inst):
Instance Home (INST_TOP):
*Oracle Applications Release 12 introduces the concept of a top-level directory for an Applications
instance. This directory is referred to as the Instance Home, and denoted by the environment variable
$INST_TOP.
*Using an Instance Home provides the ability to share Applications and technology stack code among
multiple instances, for example a development instance and a test instance.
*It also supports read-only file systems and centralization of log files.
Notable features of this architecture include:
*The latest version of Oracle Containers for Java (OC4J), the successor to JServ, is
included in Oracle Application Server 10.1.3.
*All major services are started out of the OracleAS 10.1.3 ORACLE_HOME.
*The Applications modules (packaged in the file formsapp.ear) are deployed into the
OC4J-Forms instance running out of the OracleAS 10.1.3 ORACLE_HOME, while the frmweb executable
is invoked out of the OracleAS 10.1.2 ORACLE_HOME.
3)Application tier server files (apps):
The apps/apps_st/appl (APPL_TOP) directory contains the product directories and
files for Oracle Applications.
The apps/apps_st/comn (COMMON_TOP) directory contains Java classes, HTML
pages, and other files and directories used by multiple products.
The apps/tech_st/10.1.2 directory contains the ORACLE_HOME used for the
Applications technology stack tools components.
The apps/tech_st/10.1.3 directory contains the ORACLE_HOME used for the
Applications technology stack Java components.
ORACLE_HOMEs:
There are 3 ORACLE_HOMEs in the R12 architecture:
one Oracle Database 10g Release 2 home (Oracle 10.2.0) and
two Oracle Application Server (OracleAS) homes.
*Use of two Oracle Application Server ORACLE_HOMEs in Release 12:
Two different Oracle Application Server (OracleAS) 10g releases, in separate ORACLE_HOMEs, are
used in Oracle Applications Release 12. This enables Oracle Applications to take advantage of the
latest Oracle technologies.
The Oracle Application Server 10.1.3 ORACLE_HOME (sometimes referred to as
the Web or Java ORACLE_HOME) replaces the 8.1.7-based ORACLE_HOME
provided by Oracle9i Application Server 1.0.2.2.2 in Release 11i.
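To see how these directories map to environment variables on a running instance, you can source the applications environment file and echo the variables. This is a minimal sketch; the instance name (VIS), host name (myhost), and base path are illustrative assumptions, not taken from this post, and the env file name varies per install:
$ . /u01/oracle/VIS/apps/apps_st/appl/APPSVIS_myhost.env   # source the apps environment
$ echo $APPL_TOP     # /u01/oracle/VIS/apps/apps_st/appl
$ echo $COMMON_TOP   # /u01/oracle/VIS/apps/apps_st/comn
$ echo $INST_TOP     # /u01/oracle/VIS/inst/apps/VIS_myhost
$ echo $ORACLE_HOME  # /u01/oracle/VIS/apps/tech_st/10.1.2 (tools home)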

Oracle R12 Application Architecture

Hi,
We can understand the Oracle R12 application by dividing it into 3 major components:
I)Oracle Applications Architecture
II)Applications File System Overview
III)Applications Database Organization
I)Oracle Applications Architecture:
Let us see the structure in this post.
Oracle Applications can be divided into:
1)Desktop Tier
2)Application Tier
3)Database Tier
1)Desktop Tier:
*The client interface is provided through HTML for HTML-based applications, and via a Java applet in a
Web browser for the traditional Forms-based applications.
*In Oracle Applications Release 12, each user logs in to Oracle Applications through the E-Business Suite
Home Page on a desktop client web browser, as shown in fig.1.
*The E-Business Suite Home Page provides a single point of access to HTML-based applications,
Forms-based applications, and Business Intelligence applications.

2)The Application Tier:
*The application tier has a dual role: hosting the various servers and service groups that process the
business logic, and managing communication between the desktop tier and the database tier. This tier is
sometimes referred to as the middle tier.
*3 servers or service groups comprise the basic application tier for Oracle Applications:
1)Web services: The Web services component of Oracle Application Server processes requests received
over the network from the desktop clients.
2)Forms services: Forms services in Oracle Applications Release 12 are provided by the Forms listener
servlet or Forms socket mode, which facilitates the use of firewalls, load balancing, proxies, and other
networking options.
3)Concurrent services: Processes that run on the Concurrent Processing server are called concurrent
requests. A concurrent manager reads the applicable requests in the queue table and starts the associated
concurrent program (a sketch for monitoring these requests follows below).
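As a quick way to see concurrent requests moving through this queue, a DBA can query the FND_CONCURRENT_REQUESTS table from SQL*Plus. A minimal sketch, assuming the standard APPS schema and column names; adapt the predicate to your needs:
SQL> -- Recent requests with their phase and status codes
SQL> -- (phase_code: P=Pending, R=Running, C=Completed)
SQL> SELECT request_id, phase_code, status_code, requested_by
     FROM apps.fnd_concurrent_requests
     WHERE request_date > SYSDATE - 1
     ORDER BY request_id;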
3)The Database Tier:
*The database tier contains the Oracle database server, which stores all the data maintained by Oracle
Applications. The database also stores the Oracle Applications online help information.
*More specifically, the database tier contains the Oracle data server files and Oracle Applications
database executables that physically store the tables, indexes, and other database objects for your
system.
* The database server does not communicate directly with the desktop clients, but rather with the servers
on the application tier, which mediate the communications between the database server and the clients.

How FNDLOAD Utility is useful for Oracle Apps DBA


Hi,
The FNDLOAD (Generic Loader) utility is very useful for an Apps DBA. Let us try to understand how it works
and how we can utilize this utility well.
Understanding the FNDLOAD utility:
The Generic Loader (FNDLOAD) is a concurrent program that can download data from an application
entity into a portable, editable text file. This file can then be uploaded into any other database to copy the
data.
Data structures supported by the Loader include master-detail relationships and foreign key
relationships.
FNDLOAD uses configuration (.lct) scripts to ensure consistent migration of objects within Oracle Applications.
FNDLOAD utility modes of operation:
The FNDLOAD (Generic Loader) utility operates in 2 modes:
1)Download mode
2)Upload mode
In download mode, data is downloaded from a database according to a configuration (.lct) file and
converted into a data (.ldt) file. This data file can then be uploaded to a different database. In both
downloading and uploading, the structure of the data involved is described by the configuration file.
The configuration file describes the structure of the data and also the access methods used to copy the
data into or out of the database.
The same configuration file may be used for both uploading and downloading.
When downloading, the Generic Loader creates a second file, called the data file, that contains the
structured data selected for downloading.
The data file has a standard syntax for representing the data that has been downloaded.
When uploading, the Generic Loader reads a data file to get the data that it is to upload. In most cases the
data file was produced by a previous download, but it may have come from another source.
The data file cannot be interpreted without the corresponding configuration file available.
FNDLOAD utility syntax:
DOWNLOAD COMMAND SYNTAX:
FNDLOAD apps/<apps_pwd> 0 Y DOWNLOAD ${FND_TOP}/patch/115/import/<config.lct> <data.ldt> <entity> [parameters]
UPLOAD COMMAND SYNTAX:
FNDLOAD apps/<apps_pwd> 0 Y UPLOAD ${FND_TOP}/patch/115/import/<config.lct> <data.ldt>
FNDLOAD usage details:
FNDLOAD can be used to migrate the following system administrator objects between instances
(illustrative commands for a few of these follow the list):
1.Printer Styles
2.Lookup Types and codes
3.Descriptive Flexfield (DFF)
4.Key Flexfield (KFF)
5.Concurrent programs with the parameters
6.Request Sets (when the programs are not triggered based on success)
7.Value Sets and Value set Values
8.Profiles
9.Request Groups
10.Responsibilities
11.Forms
12.Functions
13.Menus
14.Messages
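Each object type pairs with its own configuration file under $FND_TOP/patch/115/import. The commands below are a hedged sketch for two common cases (profile options via afscprof.lct and request groups via afcpreqg.lct); the XX* object names are hypothetical, and you should verify the .lct file names on your own instance before relying on them:
FNDLOAD apps/<apps_pwd> 0 Y DOWNLOAD $FND_TOP/patch/115/import/afscprof.lct XX_PROFILE.ldt PROFILE PROFILE_NAME="XX_MY_PROFILE" APPLICATION_SHORT_NAME="XXCUST"
FNDLOAD apps/<apps_pwd> 0 Y DOWNLOAD $FND_TOP/patch/115/import/afcpreqg.lct XX_REQ_GROUP.ldt REQUEST_GROUP REQUEST_GROUP_NAME="XX_MY_GROUP" APPLICATION_SHORT_NAME="XXCUST"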
Merits of the FNDLOAD utility (shortcomings of relying on cloning alone that FNDLOAD addresses):
1.A baseline environment (the source for a clone) needs to be maintained and updated on a regular basis.
2.If the base environment has issues, the cloning strategy becomes totally ineffective, and every new
environment created has to be updated with a lot of changes.
3.Cloning/refresh is not possible at short intervals.
4.Selective replication of setups and AOL objects is not possible with cloning.
5.Environments not delivered on time affect the testing schedules.
6.Manually maintaining environments at different levels of configuration is tedious.
7.Manually updating multiple environments with defect fixes is time consuming and error prone.
8.FNDLOAD is fully supported and recommended by Oracle for migration of FND objects, with essentially
no learning curve and no investment.
Demerits of the FNDLOAD utility:
1.This utility can be used for FND (System Administrator) objects only.
2.Application patching mechanisms use FNDLOAD heavily, so there is a possibility of negative impact.
3.There is no validation of sensitive data being migrated by the FNDLOAD tool itself.

Examples of the FNDLOAD utility:
Download: FNDLOAD apps/$pwd 0 Y DOWNLOAD $FND_TOP/patch/115/import/aflvmlu.lct lookup_techops_aris.ldt FND_LOOKUP_TYPE APPLICATION_SHORT_NAME="CN" LOOKUP_TYPE="XXTEST_TECHOPS_ARIS_SITES"
Download: FNDLOAD apps/$pwd 0 Y DOWNLOAD $FND_TOP/patch/115/import/aflvmlu.lct lookup_techops_points.ldt FND_LOOKUP_TYPE APPLICATION_SHORT_NAME="CN" LOOKUP_TYPE="XXTEST_TECHOPS_POINTS_SITES"
Download: FNDLOAD apps/apps 0 Y DOWNLOAD $FND_TOP/patch/115/import/afcpprog.lct concprg_XXTEST_techops_points_procedure.ldt PROGRAM APPLICATION_SHORT_NAME="XXTEST" CONCURRENT_PROGRAM_NAME="XXTEST_TECHOPS_POINTS_PRCS_EAST"
Download: FNDLOAD apps/apps 0 Y DOWNLOAD $FND_TOP/patch/115/import/afcpprog.lct concprg_XXTEST_techops_points_east_load.ldt PROGRAM APPLICATION_SHORT_NAME="XXTEST" CONCURRENT_PROGRAM_NAME="XXTEST_TECHOPS_POINTS_EAST_LOAD"
Download: FNDLOAD apps/apps 0 Y DOWNLOAD $FND_TOP/patch/115/import/afcpprog.lct concprg_XXTEST_techops_points_mw_load.ldt PROGRAM APPLICATION_SHORT_NAME="XXTEST" CONCURRENT_PROGRAM_NAME="XXTEST_TECHOPS_POINTS_MW_LOAD"
Download: FNDLOAD apps/apps 0 Y DOWNLOAD $FND_TOP/patch/115/import/afcpprog.lct concprg_XXTEST_techops_mttr_load.ldt PROGRAM APPLICATION_SHORT_NAME="XXTEST" CONCURRENT_PROGRAM_NAME="XXTEST_TECHOPS_MTTR_LOAD"
Download: FNDLOAD apps/apps 0 Y DOWNLOAD $FND_TOP/patch/115/import/afcpprog.lct concprg_XXTEST_techops_mttr_process.ldt PROGRAM APPLICATION_SHORT_NAME="XXTEST" CONCURRENT_PROGRAM_NAME="XXTEST_TECHOPS_MTTR_DATA_PROCES"
Upload: FNDLOAD apps/$pwd 0 Y UPLOAD $FND_TOP/patch/115/import/aflvmlu.lct lookup_techops_aris.ldt
Upload: FNDLOAD apps/$pwd 0 Y UPLOAD $FND_TOP/patch/115/import/aflvmlu.lct lookup_techops_points.ldt
Upload: FNDLOAD apps/apps 0 Y UPLOAD $FND_TOP/patch/115/import/afcpprog.lct concprg_xxtest_techops_points_procedure.ldt
Upload: FNDLOAD apps/apps 0 Y UPLOAD $FND_TOP/patch/115/import/afcpprog.lct concprg_xxtest_techops_points_east_load.ldt
Upload: FNDLOAD apps/apps 0 Y UPLOAD $FND_TOP/patch/115/import/afcpprog.lct concprg_xxtest_techops_points_mw_load.ldt
Upload: FNDLOAD apps/apps 0 Y UPLOAD $FND_TOP/patch/115/import/afcpprog.lct concprg_xxtest_techops_mttr_load.ldt
Upload: FNDLOAD apps/apps 0 Y UPLOAD $FND_TOP/patch/115/import/afcpprog.lct concprg_xxtest_techops_mttr_process.ldt

Refreshing Schemas in Oracle Databases

Hi,
Schema refresh is a regular task for DBAs working on database migration projects. A schema refresh is
done to keep the development, test, and performance environments in sync with production data.
Below I'm describing one such task. Often we need to refresh a whole set of schemas, so it is very
important to make a document or plan for doing this task effectively. In the task below we have 2
environments, PRODDB (production) and TESTDB (test). I'm refreshing TESTDB by taking data
from PRODDB; here only one schema is refreshed.
Preparatory steps (on the target, TESTDB):
Create a directory or use an existing one (TEST_MIG), giving read and write permission on it to the
'system' database user:
SQL> grant read,write on directory TEST_MIG to system;
Grant succeeded.
SQL> alter user system identified by TESTDBdba account unlock;
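For completeness, the TEST_MIG directory object itself can be created first if it does not already exist. A minimal sketch; the file system path is an illustrative assumption:
SQL> create or replace directory TEST_MIG as '/u01/exports/test_mig';
SQL> grant read,write on directory TEST_MIG to system;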
PRODDB:
Step 1: Export the data from the source database (PRODDB in our case).
$ vi expdp_refresh_schema_sep27.sh
expdp system/PRODDB@PRODDB DUMPFILE=REFRESH_SCHEMA.DMP DIRECTORY=DATA_PUMP_DIR SCHEMAS=REFRESH_SCHEMA LOGFILE=REFRESH_SCHEMA.log
$ nohup sh expdp_refresh_schema_sep27.sh > refresh_schema.out &
Step 2: Copy the dump file (source data) to the target database server.
We can use the 'winscp' tool (a graphical utility for copying files between Windows and Linux), or ftp,
scp, tar, or rsync to copy data from the source server to the target server (see the sketch below).
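A minimal scp sketch; the source dump path and target host name are illustrative assumptions, not from this post:
$ scp /u01/app/oracle/admin/PRODDB/dpdump/REFRESH_SCHEMA.DMP oracle@testdb-host:/u01/exports/test_mig/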
Step 3: Import the data into the target database (TESTDB).
$ impdp system/TESTDBdba@TESTDB DUMPFILE=REFRESH_SCHEMA.DMP DIRECTORY=TEST_MIG REMAP_SCHEMA=REFRESH_SCHEMA:REFRESH_SCHEMA LOGFILE=REFRESH_SCHEMA.log
Step 4: Verify the data in the source and target databases.
Note:
In Oracle 11g Release 2 (version 11.2.0.1.0) there are about 44 distinct object types; compared to previous
versions this number is large.
SQL> select * from v$version;
BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
PL/SQL Release 11.2.0.1.0 - Production
CORE 11.2.0.1.0 Production
TNS for Linux: Version 11.2.0.1.0 - Production
NLSRTL Version 11.2.0.1.0 - Production
SQL> select distinct object_type from dba_objects;
OBJECT_TYPE
-------------------
EDITION
INDEX PARTITION
CONSUMER GROUP
SEQUENCE
TABLE PARTITION
SCHEDULE
QUEUE
RULE
JAVA DATA
PROCEDURE
OPERATOR
LOB PARTITION
DESTINATION
WINDOW
SCHEDULER GROUP
DATABASE LINK
LOB
PACKAGE
PACKAGE BODY
LIBRARY
PROGRAM
RULE SET
CONTEXT
TYPE BODY
JAVA RESOURCE
XML SCHEMA
TRIGGER
JOB CLASS
UNDEFINED
DIRECTORY
MATERIALIZED VIEW
TABLE
INDEX
SYNONYM
VIEW
FUNCTION
JAVA CLASS
JAVA SOURCE
INDEXTYPE
CLUSTER
TYPE
RESOURCE PLAN
JOB
EVALUATION CONTEXT
44 rows selected.
Source database (PRODDB):
SQL> select count(*) from dba_objects where owner='REFRESH_SCHEMA';
COUNT(*)
----------
132
SQL> select count(*) from dba_tables where owner='REFRESH_SCHEMA';
COUNT(*)
----------
34
SQL> SELECT COUNT(*) FROM DBA_OBJECTS
     WHERE OWNER='REFRESH_SCHEMA'
     AND OBJECT_TYPE IN ('TABLE','JOB','VIEW','PACKAGE','TRIGGER','SYNONYM','FUNCTION','PROCEDURE','TYPE');
COUNT(*)
----------
62
Target database (TESTDB):
SQL> select count(*) from dba_objects where owner='REFRESH_SCHEMA';
COUNT(*)
----------
131
SQL> select count(*) from dba_tables where owner='REFRESH_SCHEMA';
COUNT(*)
----------
34
SQL> SELECT COUNT(*) FROM DBA_OBJECTS
     WHERE OWNER='REFRESH_SCHEMA'
     AND OBJECT_TYPE IN ('TABLE','JOB','VIEW','PACKAGE','TRIGGER','SYNONYM','FUNCTION','PROCEDURE','TYPE');
COUNT(*)
----------
62

Database refresh process


Hi,
A database refresh can be done using various techniques, but the main idea is to get fresh production
data into your development, test, or performance database environment so that developers/QAs can make
use of that data for changes and testing, depending upon the requirement. Below are the steps
which I followed.
REFRESH PROCESS 1:
In refresh process 1, I used the exp/imp and expdp/impdp utilities to transfer the data from the
source to the target database.
Steps we followed for the refresh:
Step 1: Check that the tablespaces exist, and check the tablespace sizes, in the source and destination
databases for the tablespaces mentioned below.
Source: STARQA_GDC. Destination: orcl (my desktop database).
Tablespaces: STAR02D, STAR02I, STAR01D & STAR01I
SELECT F.TABLESPACE_NAME,
       TO_CHAR ((T.TOTAL_SPACE - F.FREE_SPACE),'999,999') "USEDMB",
       TO_CHAR (F.FREE_SPACE, '999,999') "FREEMB",
       TO_CHAR (T.TOTAL_SPACE, '999,999') "TOTALMB",
       TO_CHAR ((ROUND ((F.FREE_SPACE/T.TOTAL_SPACE)*100)),'999')||' %' FREE
FROM (SELECT TABLESPACE_NAME,
             ROUND (SUM (BLOCKS*(SELECT VALUE/1024
                                 FROM V$PARAMETER
                                 WHERE NAME = 'db_block_size')/1024)) FREE_SPACE
      FROM DBA_FREE_SPACE
      GROUP BY TABLESPACE_NAME) F,
     (SELECT TABLESPACE_NAME,
             ROUND (SUM (BYTES/1048576)) TOTAL_SPACE
      FROM DBA_DATA_FILES
      GROUP BY TABLESPACE_NAME) T
WHERE F.TABLESPACE_NAME = T.TABLESPACE_NAME;
Check whether these tablespaces exist; if so, check their sizes, add space if required, and compare the
output of the above query in orcl.
SOURCE DATABASE: STARQA_GDC
TABLESPACE_NAME USEDMB FREEMB TOTALMB FREE
--------------- ------ ------ ------- ----
STAR01D         18,055 14,673  32,728 45 %
STAR01I          2,067    933   3,000 31 %
STAR02D         32,706  2,004  34,710  6 %
STAR02I          3,003  1,497   4,500 33 %
DESTINATION DATABASE: ORCL
TABLESPACE_NAME USEDMB FREEMB TOTALMB FREE
--------------- ------ ------ ------- ----
STAR01D          7,729 11,371  19,100 60 %
STAR01I          1,898    143   2,041  7 %
STAR02D         23,813  1,136  24,949  5 %
STAR02I          2,969    159   3,128  5 %
Compare the USEDMB column in the source database (STARQA_GDC) with the TOTALMB column in the
destination database (ORCL) carefully and verify the space needed for the refresh. If the space in a
particular tablespace is insufficient, increase the size of that tablespace so the import operation, and
hence the database refresh, is successful.
Step 2: Drop the users whose data needs to be refreshed.
Use the script below for this; it is located in /work/Rafijobs/Dropstarusers.sql.
DROP USER STARTXN CASCADE;
DROP USER STARREP CASCADE;
DROP USER STARREPAPP CASCADE;
DROP USER STARMIG CASCADE;
DROP USER STARTXNAPP CASCADE;
Run the script Dropstarusers.sql at the SQL prompt:
SQL>@Dropstarusers.sql
Step 3: Create the users whose data needs to be refreshed.
Use the script located in /work/Rafijobs/createstarusers.sql, or generate the script from the TOAD tool,
and create the users with the required privileges and grants.
We need to create the 5 users STARTXN, STARREP, STARREPAPP, STARMIG & STARTXNAPP whose
data needs to be refreshed (a sketch of such a script follows).
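A minimal sketch of what createstarusers.sql might contain for one user; the password placeholder, tablespace assignments, and grants are illustrative assumptions, not taken from the original script:
CREATE USER STARTXN IDENTIFIED BY <password>
  DEFAULT TABLESPACE STAR01D
  TEMPORARY TABLESPACE TEMP
  QUOTA UNLIMITED ON STAR01D;
GRANT CONNECT, RESOURCE TO STARTXN;
-- repeat for STARREP, STARREPAPP, STARMIG and STARTXNAPP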
Step 4: Copy the export dump files to the destination location using the WinSCP utility.
Step 5: Create the directory for the Data Pump import:
create directory IMP_DP2_JUN as 'D:\GDC_21_JUNE_BKP';
grant read,write on directory imp_dp2_Jun to public;
(or)
grant read,write on directory imp_dp2_Jun to system;
since the import is done as the system user.
Step 6: Import the dump files on the desktop.
The Data Pump import is done for the dump files received from the GDC team, after unzipping them.
The import commands for the 5 users are:
impdp system/database directory=IMP_DP2_JUN dumpfile=STARMIG_210610.dmp logfile=STARMIG_210610.log
impdp system/database directory=IMP_DP2_JUN dumpfile=STARREP_210610.dmp logfile=STARREP_210610.log
impdp system/database directory=IMP_DP2_JUN dumpfile=STARREPAPP_210610.dmp logfile=STARREPAPP_210610.log
impdp system/database directory=IMP_DP2_JUN dumpfile=STARTXN_210610.dmp logfile=STARTXN_210610.log
impdp system/database directory=IMP_DP2_JUN dumpfile=STARTXNAPP_210610.dmp logfile=STARTXNAPP_210610.log
Step 7: Verify the log files for each import.
Step 8: Export from the target box:
starmig:
exp system/database@orcl.world file='/work/Rafi/exp_orcl_starmig_23jun10.dmp' log=/work/Rafi/exp_orcl_starmig_23jun10.log owner=starmig statistics=none
vi imp_starmig.sh => add the exp script here.
To run in the background without interruption:
nohup sh imp_starmig.sh > a.out &
startxn:
exp system/database@orcl.world file='/work/Rafi/exp_orcl_startxn_23jun10.dmp' log=/work/Rafi/exp_orcl_startxn_23jun10.log owner=startxn statistics=none
nohup sh imp_startxn.sh > b.out &
startxnapp:
exp system/database@orcl.world file='/work/Rafi/exp_orcl_startxnapp_23jun10.dmp' log=/work/Rafi/exp_orcl_startxnapp_23jun10.log owner=startxnapp statistics=none
nohup sh imp_startxnapp.sh > c.out &
starrep:
exp system/database@orcl.world file='/work/Rafi/exp_orcl_starrep_25jun10.dmp' log=/work/Rafi/exp_orcl_starrep_25jun10.log owner=starrep statistics=none
nohup sh imp_starrep25.sh > r.out &
starrepapp:
exp system/database@orcl.world file='/work/Rafi/exp_orcl_starrepapp_23jun10.dmp' log=/work/Rafi/exp_orcl_starrepapp_23jun10.log owner=starrepapp statistics=none
nohup sh imp_starrepapp.sh > e.out &
Step 9: Import into the '36' box (STARDEV):
starmig:
imp system/star@STARDEV file='/work/Rafi/exp_orcl_starmig_23jun10.dmp' log=imp_starmig_stardev23jun10.log fromuser=starmig touser=starmig
nohup sh imp_starmigDEV.sh > f.out &
startxn:
imp system/star@STARDEV file='/work/Rafi/exp_orcl_startxn_23jun10.dmp' log=imp_startxn_stardev23jun10.log fromuser=startxn touser=startxn
nohup sh imp_startxnDEV.sh > g.out &
startxnapp:
imp system/star@STARDEV file='/work/Rafi/exp_orcl_startxnapp_23jun10.dmp' log=imp_startxnapp_stardev23jun10.log fromuser=startxnapp touser=startxnapp
nohup sh imp_startxnappDEV.sh > h.out &
starrep:
imp system/star@STARDEV file='/work/Rafi/exp_orcl_starrep_25jun10.dmp' log=imp_starrep_stardev25jun10.log fromuser=starrep touser=starrep
nohup sh imp_starrepDEV.sh > i.out &
starrepapp:
imp system/star@STARDEV file='/work/Rafi/exp_orcl_starrepapp_23jun10.dmp' log=imp_starrepapp_stardev23jun10.log fromuser=starrepapp touser=starrepapp
nohup sh imp_starrepappDEV.sh > j.out &
The per-schema exp/imp pattern above repeats verbatim for each user, so it can also be driven by a small loop; see the sketch below.
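A minimal shell sketch of the same idea, looping over the five schemas. The connection strings and paths follow the commands above, but treat this as an illustration; in practice passwords would be prompted for rather than hard-coded:
#!/bin/sh
# Export each schema from orcl, then import it into STARDEV
for u in starmig startxn startxnapp starrep starrepapp; do
  dmp=/work/Rafi/exp_orcl_${u}_23jun10.dmp
  exp system/database@orcl.world file=$dmp log=/work/Rafi/exp_orcl_${u}_23jun10.log owner=$u statistics=none
  imp system/star@STARDEV file=$dmp log=imp_${u}_stardev23jun10.log fromuser=$u touser=$u
done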
Step 10: Verify the objects and tables that were imported.
Connect to SQL*Plus as each user:
1)startxn:
SQL>Select count(*) from user_objects;
SQL>Select count(*) from user_tables;
Compare this with the startxn schema in the STARQA_GDC database.
2)starrep:
SQL>Select count(*) from user_objects;
SQL>Select count(*) from user_tables;
Compare this with the starrep schema in the STARQA_GDC database.
3)startxnapp:
SQL>Select count(*) from user_objects;
SQL>Select count(*) from user_tables;
Compare this with the startxnapp schema in the STARQA_GDC database.
4)starrepapp:
SQL>Select count(*) from user_objects;
SQL>Select count(*) from user_tables;
Compare this with the starrepapp schema in the STARQA_GDC database.
5)starmig:
SQL>Select count(*) from user_objects;
SQL>Select count(*) from user_tables;
Compare this with the starmig schema in the STARQA_GDC database.
Step 11: Compare the schemas in case of differences in the objects.
Open the source and destination database TOAD sessions separately.
If we find some schema difference, use the TOAD utility to compare the schemas and check for objects
and tables that were not imported. In TOAD, go to the DBA tab, click Compare Schemas, and select the
objects to compare. This shows the objects that are missing in the particular destination schemas.
Step 12: Use the exp utility to export the missing objects interactively.
REFRESH PROCESS 2:
In refresh process 2, the indirect method is avoided because of the parameter version=10.2: when the
export is done with this parameter, you can directly import without losing any data using the expdp and
impdp utilities.
Steps for the refresh from source (11.1.0.7) to target (11.1.0.6):
Step 1: Check that the tablespaces exist, and check the tablespace sizes, in the source and destination
databases for the tablespaces mentioned below.
Source: STARQA_GDC. Target: STARTST.
Tablespaces: STAR02D, STAR02I, STAR01D & STAR01I
SELECT F.TABLESPACE_NAME,
       TO_CHAR ((T.TOTAL_SPACE - F.FREE_SPACE),'999,999') "USEDMB",
       TO_CHAR (F.FREE_SPACE, '999,999') "FREEMB",
       TO_CHAR (T.TOTAL_SPACE, '999,999') "TOTALMB",
       TO_CHAR ((ROUND ((F.FREE_SPACE/T.TOTAL_SPACE)*100)),'999')||' %' FREE
FROM (SELECT TABLESPACE_NAME,
             ROUND (SUM (BLOCKS*(SELECT VALUE/1024
                                 FROM V$PARAMETER
                                 WHERE NAME = 'db_block_size')/1024)) FREE_SPACE
      FROM DBA_FREE_SPACE
      GROUP BY TABLESPACE_NAME) F,
     (SELECT TABLESPACE_NAME,
             ROUND (SUM (BYTES/1048576)) TOTAL_SPACE
      FROM DBA_DATA_FILES
      GROUP BY TABLESPACE_NAME) T
WHERE F.TABLESPACE_NAME = T.TABLESPACE_NAME;
Check whether these tablespaces exist; if so, check their sizes, add space if required, and compare the
output of the above query in STARTST.
SOURCE DATABASE: STARQA_GDC
TABLESPACE_NAME USEDMB FREEMB TOTALMB FREE
--------------- ------ ------ ------- ----
STAR01D         18,055 14,673  32,728 45 %
STAR01I          2,067    933   3,000 31 %
STAR02D         32,706  2,004  34,710  6 %
STAR02I          3,003  1,497   4,500 33 %
DESTINATION DATABASE: STARTST
TABLESPACE_NAME USEDMB FREEMB TOTALMB FREE
--------------- ------ ------ ------- ----
STAR01D          7,729 11,371  19,100 60 %
STAR01I          1,898    143   2,041  7 %
STAR02D         23,813  1,136  24,949  5 %
STAR02I          2,969    159   3,128  5 %
Compare the USEDMB column in the source database (STARQA_GDC) with the TOTALMB column in the
target database (STARTST) carefully and verify the space needed for the refresh. If the space in a
particular tablespace is insufficient, increase the size of that tablespace so the import operation, and
hence the database refresh, is successful.
Step 2: Drop the users whose data needs to be refreshed.
Use the script below for this; it is located in /work/Rafijobs/Dropstarusers.sql.
DROP USER STARTXN CASCADE;
DROP USER STARREP CASCADE;
DROP USER STARREPAPP CASCADE;
DROP USER STARMIG CASCADE;
DROP USER STARTXNAPP CASCADE;
Run the script Dropstarusers.sql at the SQL prompt:
SQL>@Dropstarusers.sql

Step 3: Create the users whose data needs to be refreshed.
Use the script located in /work/Rafijobs/createstarusers.sql, or generate the script from the TOAD tool,
and create the users with the required privileges and grants.
We need to create the 5 users STARTXN, STARREP, STARREPAPP, STARMIG & STARTXNAPP whose
data needs to be refreshed.
Step 4: Copy the export dump files to the destination location using WinSCP.
Note: Step 5 is only possible when the expdp is done with version=10.2. Since the source database
version is 11.1.0.7 and the target database version is 11.1.0.6, an import from a higher to a lower version
is only possible when the export is done with the version=10.2 parameter (for the expdp steps, refer to
refresh method 1; a sketch follows below).
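A minimal sketch of such an export on the source; the schema and file names follow the pattern of the imports below, but the exact command was not given in the original post, so treat this as an illustration:
expdp system/<pwd>@STARQA_GDC SCHEMAS=STARMIG DIRECTORY=DATA_PUMP_DIR DUMPFILE=STARMIG_210610.dmp LOGFILE=expdp_STARMIG_210610.log VERSION=10.2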
Step 5: Create the directory for the Data Pump import:
create directory IMP_DP2_JUN as 'D:\GDC_21_JUNE_BKP';
grant read,write on directory imp_dp2_Jun to public;
(or)
grant read,write on directory imp_dp2_Jun to system;
since the import is done as the system user. Then run the imports:
impdp system/database directory=IMP_DP2_JUN dumpfile=STARMIG_210610.dmp logfile=STARMIG_210610.log version=10.2
impdp system/database directory=IMP_DP2_JUN dumpfile=STARREP_210610.dmp logfile=STARREP_210610.log version=10.2
impdp system/database directory=IMP_DP2_JUN dumpfile=STARREPAPP_210610.dmp logfile=STARREPAPP_210610.log version=10.2
impdp system/database directory=IMP_DP2_JUN dumpfile=STARTXN_210610.dmp logfile=STARTXN_210610.log version=10.2
impdp system/database directory=IMP_DP2_JUN dumpfile=STARTXNAPP_210610.dmp logfile=STARTXNAPP_210610.log version=10.2
Step 6: Verify the log files for each import.
Step 7: Verify the objects and tables that were imported.
Connect to SQL*Plus as each user:
1)startxn:
SQL>Select count(*) from user_objects;
SQL>Select count(*) from user_tables;
Compare this with the startxn schema in the STARQA_GDC database.
2)starrep:
SQL>Select count(*) from user_objects;
SQL>Select count(*) from user_tables;
Compare this with the starrep schema in the STARQA_GDC database.
3)startxnapp:
SQL>Select count(*) from user_objects;
SQL>Select count(*) from user_tables;
Compare this with the startxnapp schema in the STARQA_GDC database.
4)starrepapp:
SQL>Select count(*) from user_objects;
SQL>Select count(*) from user_tables;
Compare this with the starrepapp schema in the STARQA_GDC database.
5)starmig:
SQL>Select count(*) from user_objects;
SQL>Select count(*) from user_tables;
Compare this with the starmig schema in the STARQA_GDC database.
Step 8: Compare the schemas in case of differences in the objects.
Open the source and destination database TOAD sessions separately.
If we find some schema difference, use the TOAD utility to compare the schemas and check for objects
and tables that were not imported. In TOAD, go to the DBA tab, click Compare Schemas, and select the
objects to compare.

How To Resize and/or Add Redo Logs


Applies to:
Oracle Server - Enterprise Edition - Version: 9.2.0.1 to 10.2.0.4 - Release: 9.2 to 10.2
Information in this document applies to any platform.
Goal:
The purpose of this document is to demonstrate:
A. How to resize and/or add redo logs.
B. How to determine the optimal size for redo logs

Solution:
A. How to resize and/or add redo logs.
1. Review information on the existing redo logs:
SQL> SELECT a.group#, b.member, a.status, a.bytes
     FROM v$log a, v$logfile b
     WHERE a.group# = b.group#;
2. Add new groups:
SQL> ALTER DATABASE ADD LOGFILE GROUP 4 ('/log01A.dbf', '/log01B.dbf') SIZE 512M;
SQL> ALTER DATABASE ADD LOGFILE GROUP 5 ('/log02A.dbf', '/log02B.dbf') SIZE 512M;
SQL> ALTER DATABASE ADD LOGFILE GROUP 6 ('/log03A.dbf', '/log03B.dbf') SIZE 512M;
3. Check the status of all redo logs again:
SQL> SELECT a.group#, b.member, a.status, a.bytes
     FROM v$log a, v$logfile b
     WHERE a.group# = b.group#;
4. Drop the online redo log groups that are not needed. You must have the ALTER
DATABASE system privilege.
Note: Before dropping an online redo log group, consider the following restrictions and
precautions:
a. An instance requires at least two groups of online redo log files, regardless of the number
of members in the groups. (A group is one or more members.)
b. You can drop an online redo log group only if it is INACTIVE. If you need to drop the current
group, first force a log switch to occur by using this command:
SQL> ALTER SYSTEM SWITCH LOGFILE;
c. Make sure an online redo log group is archived (if archiving is enabled) before dropping it.
This can be determined by:
SQL> SELECT GROUP#, ARCHIVED, STATUS FROM V$LOG;
GROUP# ARC STATUS
------ --- --------
     1 YES ACTIVE
     2 NO  CURRENT
     3 YES INACTIVE
     4 YES UNUSED
     5 YES UNUSED
     6 YES UNUSED
d. Once the group is inactive and archived, drop it:
SQL> ALTER DATABASE DROP LOGFILE GROUP 3;
e. After dropping an online redo log group, make sure that the drop completed successfully,
and then use the appropriate operating system command to delete the dropped online redo
log files.
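Goal B (determining the optimal redo log size) is not elaborated in the note text reproduced here. One common approach, offered as a hedged sketch, is to check how often log switches occur; a frequently cited rule of thumb is to size the logs so a switch happens roughly every 15-20 minutes at peak load:
SQL> -- Log switches per hour, from the log history
SQL> SELECT TO_CHAR(first_time,'YYYY-MM-DD HH24') AS hour, COUNT(*) AS switches
     FROM v$log_history
     GROUP BY TO_CHAR(first_time,'YYYY-MM-DD HH24')
     ORDER BY 1;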

OS File Permissions
The chmod command is used to alter file permissions after a file has been created, for example:
chmod -R 777 abc
-R changes the permissions recursively, so all files under the directory get the same permissions.
Each octal digit applies to owner, group, and others respectively:
Digit  Owner     Group     Others    Permission
7      u+rwx     g+rwx     o+rwx     read + write + execute
6      u+rw      g+rw      o+rw      read + write
5      u+rx      g+rx      o+rx      read + execute
4      u+r       g+r       o+r       read only
2      u+w       g+w       o+w       write only
1      u+x       g+x       o+x       execute only
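A couple of illustrative examples of applying these digits (the file and directory names are placeholders):
$ chmod 755 run_backup.sh        # owner: rwx; group and others: read + execute
$ chmod -R 750 /u01/app/scripts  # owner: rwx; group: read + execute; others: no access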
