Hi,
A few days back I posted about the 3 tiers: 1) Desktop, 2) Application, 3) Database.
I'm continuing with the R12 application file system in this post.
II)Applications File System Overview
The top-level R12 Applications directory structure is divided into 3 parts:
1)Database server files
2)Instance-specific files
3)Application tier server files
Hi,
We can understand Oracle R12 Application by dividing it into 3 major components:
I)Oracle Applications Architecture
II)Applications File System Overview
III)Applications Database Organization
I)Oracle Applications Architecture:
Let us see the structure in this post.
Oracle Application can be divided into
1)Desktop Tier
2)The Application Tier
3)The Database Tier
I)Desktop Tier:
*The client interface is provided through HTML for HTML-based applications, and via a Java applet in a
Web browser for the traditional Forms-based applications.
*In Oracle Applications Release 12, each user logs in to Oracle Applications through the E-Business Suite
Home Page on a desktop client web browser as shown in fig.1.
*The E-Business Suite Home Page as shown in fig.1 provides a single point of access to HTML-based applications, Forms-based applications, and Business Intelligence applications.
Hi,
Schema refresh is a regular task for DBAs working on Database migration projects. A schema refresh
is done to keep the development, test and performance environments in sync with production
Database data.
Below I'm describing one such task. Often we need to refresh a whole set of schemas, so it is very
important to make a document or plan for doing this task effectively. In the below task we have 2
environments: PRODDB (production) and TESTDB (test). I'm refreshing TESTDB by taking Data
from PRODDB; here only one schema is refreshed.
Source side:
Preparatory Steps:
Create a directory, or use an existing directory, granting read and write permission on it to the
'system' Database user (TEST_MIG in our case).
SQL> grant read,write on directory TEST_MIG to system;
Grant succeeded.
SQL> alter user system identified by TESTDBdba account unlock;
PRODDB:
Step 1:Exporting the Data from the source Database(PRODDB in our case)
vi expdp_refresh_schema_sep27.sh
$ expdp system/PRODDB@PRODDB DUMPFILE=REFRESH_SCHEMA.DMP
DIRECTORY=DATA_PUMP_DIR SCHEMAS=REFRESH_SCHEMA LOGFILE=REFRESH_SCHEMA.log
$ nohup sh expdp_refresh_schema_sep27.sh>refresh_schema.out &
Step 2:Copying the dump file(Source Data) to Target Database server
We can use the 'winscp' tool (a graphical utility for copying files between Windows and Linux), or ftp,
scp, tar or rsync for copying Data from the source server to the target server.
Step 3:Moving Data into the target Database.
$ impdp system/TESTDBdba@TESTDB DUMPFILE=REFRESH_SCHEMA.DMP
DIRECTORY=TEST_MIG REMAP_SCHEMA=REFRESH_SCHEMA:REFRESH_SCHEMA
LOGFILE=REFRESH_SCHEMA.log
Step 4:Verify the Data in Source and Target Databases.
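Beyond the count queries below, one lightweight check (my own suggestion, not part of the original procedure) is to spool the schema's object list to a text file on each side and diff the two files:

```shell
# Sketch: compare two spooled object lists. On each server run, in SQL*Plus:
#   spool objs.txt
#   select object_type||' '||object_name from dba_objects where owner='REFRESH_SCHEMA';
#   spool off
# then copy both files to one host and compare them with this helper.
compare_objs() {
    sort "$1" > /tmp/src_sorted.$$
    sort "$2" > /tmp/tgt_sorted.$$
    if diff -u /tmp/src_sorted.$$ /tmp/tgt_sorted.$$ > /tmp/objs.diff.$$; then
        echo "object lists match"
    else
        echo "object lists differ -- see /tmp/objs.diff.$$"
    fi
}

# demo with two tiny fabricated spool files
printf 'TABLE T1\nINDEX I1\n' > /tmp/src_objs.txt
printf 'INDEX I1\nTABLE T1\n' > /tmp/tgt_objs.txt
compare_objs /tmp/src_objs.txt /tmp/tgt_objs.txt
```

Sorting first makes the comparison independent of the order in which the dictionary returns rows.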
Note:
In Oracle 11g Release 2 (version 11.2.0.1.0) there are about 44 distinct object_types; compared to
previous versions this number is huge.
SQL> select * from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
PL/SQL Release 11.2.0.1.0 - Production
CORE    11.2.0.1.0      Production
TNS for Linux: Version 11.2.0.1.0 - Production
NLSRTL Version 11.2.0.1.0 - Production
SQL> select distinct object_type from dba_objects;

OBJECT_TYPE
-------------------
EDITION
INDEX PARTITION
CONSUMER GROUP
SEQUENCE
TABLE PARTITION
SCHEDULE
QUEUE
RULE
JAVA DATA
PROCEDURE
OPERATOR
LOB PARTITION
DESTINATION
WINDOW
SCHEDULER GROUP
DATABASE LINK
LOB
PACKAGE
PACKAGE BODY
LIBRARY
PROGRAM
RULE SET
CONTEXT
TYPE BODY
JAVA RESOURCE
XML SCHEMA
TRIGGER
JOB CLASS
UNDEFINED
DIRECTORY
MATERIALIZED VIEW
TABLE
INDEX
SYNONYM
VIEW
FUNCTION
JAVA CLASS
JAVA SOURCE
INDEXTYPE
CLUSTER
TYPE
RESOURCE PLAN
JOB
EVALUATION CONTEXT

44 rows selected.
Source Database:
PRODDB:
--------
SQL> select count(*) from dba_objects where owner='REFRESH_SCHEMA';

  COUNT(*)
----------
       132

SQL> select count(*) from dba_tables where owner='REFRESH_SCHEMA';

  COUNT(*)
----------
        34

SQL> SELECT COUNT(*) FROM DBA_OBJECTS
     WHERE OWNER='REFRESH_SCHEMA'
     AND OBJECT_TYPE IN ('TABLE','JOB','VIEW','PACKAGE','TRIGGER','SYNONYM','FUNCTION','PROCEDURE','TYPE');

  COUNT(*)
----------
        62
TARGET DATABASE:
TESTDB:
-------
SQL> select count(*) from dba_objects where owner='REFRESH_SCHEMA';

  COUNT(*)
----------
       131

SQL> select count(*) from dba_tables where owner='REFRESH_SCHEMA';

  COUNT(*)
----------
        34

SQL> SELECT COUNT(*) FROM DBA_OBJECTS
     WHERE OWNER='REFRESH_SCHEMA'
     AND OBJECT_TYPE IN ('TABLE','JOB','VIEW','PACKAGE','TRIGGER','SYNONYM','FUNCTION','PROCEDURE','TYPE');

  COUNT(*)
----------
        62
Step 1: Check the free space in the source and target tablespaces using the below query:

SELECT F.TABLESPACE_NAME,
       TO_CHAR ((T.TOTAL_SPACE - F.FREE_SPACE),'999,999') "USEDMB",
       TO_CHAR (F.FREE_SPACE, '999,999') "FREEMB",
       TO_CHAR (T.TOTAL_SPACE, '999,999') "TOTALMB",
       TO_CHAR ((ROUND ((F.FREE_SPACE/T.TOTAL_SPACE)*100)),'999')||' %' FREE
FROM (SELECT TABLESPACE_NAME,
             ROUND (SUM (BLOCKS*(SELECT VALUE/1024
                                 FROM V$PARAMETER
                                 WHERE NAME = 'db_block_size')/1024)) FREE_SPACE
      FROM DBA_FREE_SPACE
      GROUP BY TABLESPACE_NAME) F,
     (SELECT TABLESPACE_NAME,
             ROUND (SUM (BYTES/1048576)) TOTAL_SPACE
      FROM DBA_DATA_FILES
      GROUP BY TABLESPACE_NAME) T
WHERE F.TABLESPACE_NAME = T.TABLESPACE_NAME;

Check if these tablespaces exist; if so, check their size, add space if required, and send me the output
of the above query in orcl.
SOURCE DATABASE: STARQA_GDC
---------------------------
TABLESPACE_NAME   USEDMB   FREEMB  TOTALMB  FREE
--------------- -------- -------- -------- -----
STAR01D           18,055   14,673   32,728  45 %
STAR01I            2,067      933    3,000  31 %
STAR02D           32,706    2,004   34,710   6 %
STAR02I            3,003    1,497    4,500  33 %

DESTINATION DATABASE: ORCL
--------------------------
TABLESPACE_NAME   USEDMB   FREEMB  TOTALMB  FREE
--------------- -------- -------- -------- -----
STAR01D            7,729   11,371   19,100  60 %
STAR01I            1,898      143    2,041   7 %
STAR02D           23,813    1,136   24,949   5 %
STAR02I            2,969      159    3,128   5 %
Check the USEDMB column in the SOURCE DATABASE (STARQA_GDC) and the TOTALMB column in the
DESTINATION DATABASE (ORCL) carefully and verify the space needed for the refresh. If the space in a
particular tablespace is insufficient, increase the size of that tablespace so that the import operation,
and hence the Database refresh, is successful.
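As a worked example with the figures above for STAR01D: the source has 18,055 MB used, while the target has only 19,100 - 7,729 = 11,371 MB free, so the tablespace must grow before the import. A quick shell sanity check, using those report numbers as hypothetical inputs:

```shell
# hypothetical figures for STAR01D taken from the report above (in MB)
SRC_USED=18055    # USEDMB on STARQA_GDC
TGT_TOTAL=19100   # TOTALMB on ORCL
TGT_USED=7729     # USEDMB on ORCL

TGT_FREE=$((TGT_TOTAL - TGT_USED))
if [ "$SRC_USED" -gt "$TGT_FREE" ]; then
    echo "STAR01D: add at least $((SRC_USED - TGT_FREE)) MB before the import"
else
    echo "STAR01D: enough free space ($TGT_FREE MB)"
fi
```

Here it prints that at least 6,684 MB must be added to STAR01D on the target.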
Step 2:Drop the users whose Data needs to be refreshed.
Use the below script for this; it is located in /work/Rafijobs/Dropstarusers.sql
DROP USER STARTXN CASCADE;
DROP USER STARREP CASCADE;
DROP USER STARREPAPP CASCADE;
DROP USER STARMIG CASCADE;
DROP USER STARTXNAPP CASCADE;
Run the script Dropstarusers.sql at the SQL prompt:
SQL>@Dropstarusers.sql
Step 3: Create the users whose Data needs to be refreshed.
Use the script for this, located in /work/Rafijobs/createstarusers.sql, or take the script from the
TOAD tool and create the users with the required privileges and grants.
We need to create the 5 users STARTXN, STARREP, STARREPAPP, STARMIG & STARTXNAPP whose
Data needs to be refreshed.
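The real createstarusers.sql is site-specific, but a skeleton for the five users can be generated in a loop. The password, default tablespace and grants below are placeholders of mine, not the script from /work/Rafijobs:

```shell
# generate a skeleton createstarusers.sql for the five STAR users
# (password, tablespace and grants are placeholders -- adjust to your site)
: > /tmp/createstarusers.sql
for u in STARTXN STARREP STARREPAPP STARMIG STARTXNAPP; do
    cat >> /tmp/createstarusers.sql <<EOF
CREATE USER $u IDENTIFIED BY changeme
  DEFAULT TABLESPACE STAR01D TEMPORARY TABLESPACE TEMP;
GRANT CONNECT, RESOURCE TO $u;
EOF
done
grep -c '^CREATE USER' /tmp/createstarusers.sql   # prints 5
```

Review and edit the generated file before running it; in particular, replace the placeholder password and grant only the privileges the application actually needs.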
Step 4:Copy the export dump files to the destination location by using WINSCP utility.
Step 5:Create the directory for the Datapump import:
create directory IMP_DP2_JUN as 'D:\GDC_21_JUNE_BKP';
grant read,write on directory imp_dp2_Jun to public;
(or)
grant read,write on directory imp_dp2_Jun to system;
since the import is done with the system user.
Step 6: Importing the dump files on the destination machine:
The import scripts are below. The Data Pump import is done for the dump files received from the GDC
team, after unzipping them.
impdp system/database directory=IMP_DP2_JUN dumpfile=STARMIG_210610.dmp
logfile=STARMIG_210610.log
impdp system/database directory=IMP_DP2_JUN dumpfile=STARREP_210610.dmp
logfile=STARREP_210610.log
impdp system/database directory=IMP_DP2_JUN dumpfile=STARREPAPP_210610.dmp
logfile=STARREPAPP_210610.log
impdp system/database directory=IMP_DP2_JUN dumpfile=STARTXN_210610.dmp
logfile=STARTXN_210610.log
impdp system/database directory=IMP_DP2_JUN dumpfile=STARTXNAPP_210610.dmp
logfile=STARTXNAPP_210610.log
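The five commands above are identical except for the schema name, so a small driver loop can generate them (and, with eval or a pipe to sh, run them), assuming the same dump-file naming convention:

```shell
# build the five impdp command lines; echo them for review before running
CMDS=$(for u in STARMIG STARREP STARREPAPP STARTXN STARTXNAPP; do
    echo "impdp system/database directory=IMP_DP2_JUN dumpfile=${u}_210610.dmp logfile=${u}_210610.log"
done)
echo "$CMDS"
```

Printing the commands first, instead of running them directly, gives a chance to spot a wrong dump-file name before anything touches the database.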
Step 7:Verify the log files for each import
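Scanning every import log for ORA- errors catches most problems quickly. A minimal sketch (log names assumed to follow the pattern above; the demo logs are fabricated):

```shell
# flag any import log containing an ORA- error
check_logs() {
    for f in "$@"; do
        if grep -q 'ORA-' "$f"; then
            echo "$f: ORA- errors found"
        else
            echo "$f: clean"
        fi
    done
}

# demo with two fabricated logs
printf 'import done\n'                > /tmp/STARMIG_210610.log
printf 'ORA-39082: compile warning\n' > /tmp/STARREP_210610.log
check_logs /tmp/STARMIG_210610.log /tmp/STARREP_210610.log
```

Note that some ORA- messages in an import log are benign (for example, compilation warnings), so any flagged log still deserves a manual read.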
Step 8: Exporting on the target box:
starmig:
exp system/database@orcl.world file='/work/Rafi/exp_orcl_starmig_23jun10.dmp'
log=/work/Rafi/exp_orcl_starmig_23jun10.log owner=starmig statistics=none
vi imp_starmig.sh => add the above exp command to this script.
To run it in the background without interruption:
nohup sh imp_starmig.sh > a.out &
startxn:
exp system/database@orcl.world file='/work/Rafi/exp_orcl_startxn_23jun10.dmp'
log=/work/Rafi/exp_orcl_startxn_23jun10.log owner=startxn statistics=none
nohup sh imp_startxn.sh > b.out &
startxnapp:
exp system/database@orcl.world file='/work/Rafi/exp_orcl_startxnapp_23jun10.dmp'
log=/work/Rafi/exp_orcl_startxnapp_23jun10.log owner=startxnapp statistics=none
nohup sh imp_startxnapp.sh > c.out &
starrep:
exp system/database@orcl.world file='/work/Rafi/exp_orcl_starrep_25jun10.dmp'
log=/work/Rafi/exp_orcl_starrep_25jun10.log owner=starrep statistics=none
nohup sh imp_starrep25.sh > r.out &
starrepapp:
exp system/database@orcl.world file='/work/Rafi/exp_orcl_starrepapp_23jun10.dmp'
log=/work/Rafi/exp_orcl_starrepapp_23jun10.log owner=starrepapp statistics=none
nohup sh imp_starrepapp.sh > e.out &
Step 9: Importing in 36 box:
starmig:
imp system/star@STARDEV file='/work/Rafi/exp_orcl_starmig_23jun10.dmp'
log=imp_starmig_stardev23jun10.log fromuser=starmig touser=starmig
nohup sh imp_starmigDEV.sh > f.out &
startxn:
imp system/star@STARDEV file='/work/Rafi/exp_orcl_startxn_23jun10.dmp'
log=imp_startxn_stardev23jun10.log fromuser=startxn touser=startxn
nohup sh imp_startxnDEV.sh > g.out &
startxnapp:
Step 1: Check the below-mentioned tablespaces:
Source: STARQA_GDC  Target: STARTST
Tablespaces: STAR02D, STAR02I, STAR01D & STAR01I
SELECT F.TABLESPACE_NAME,TO_CHAR ((T.TOTAL_SPACE - F.FREE_SPACE),'999,999')
"USEDMB",
TO_CHAR (F.FREE_SPACE, '999,999') "FREEMB",
TO_CHAR (T.TOTAL_SPACE, '999,999') "TOTALMB",
TO_CHAR ((ROUND ((F.FREE_SPACE/T.TOTAL_SPACE)*100)),'999')||' %' FREE
FROM (SELECT TABLESPACE_NAME,
ROUND (SUM (BLOCKS*(SELECT VALUE/1024
FROM V$PARAMETER
WHERE NAME = 'db_block_size')/1024) ) FREE_SPACE
FROM DBA_FREE_SPACE
GROUP BY TABLESPACE_NAME ) F,
(
SELECT TABLESPACE_NAME,
ROUND (SUM (BYTES/1048576)) TOTAL_SPACE
FROM DBA_DATA_FILES
GROUP BY TABLESPACE_NAME ) T
WHERE F.TABLESPACE_NAME = T.TABLESPACE_NAME;
Check if these tablespaces exist; if so, check their size, add space if required, and send me the output
of the above query in STARTST.
SOURCE DATABASE: STARQA_GDC
---------------------------
TABLESPACE_NAME   USEDMB   FREEMB  TOTALMB  FREE
--------------- -------- -------- -------- -----
STAR01D           18,055   14,673   32,728  45 %
STAR01I            2,067      933    3,000  31 %
STAR02D           32,706    2,004   34,710   6 %
STAR02I            3,003    1,497    4,500  33 %

DESTINATION DATABASE: STARTST
-----------------------------
TABLESPACE_NAME   USEDMB   FREEMB  TOTALMB  FREE
--------------- -------- -------- -------- -----
STAR01D            7,729   11,371   19,100  60 %
STAR01I            1,898      143    2,041   7 %
STAR02D           23,813    1,136   24,949   5 %
STAR02I            2,969      159    3,128   5 %
Check the USEDMB column in the SOURCE DATABASE (STARQA_GDC) and the TOTALMB column in the
TARGET DATABASE (STARTST) carefully and verify the space needed for the refresh. If the space in a
particular tablespace is insufficient, increase the size of that tablespace so that the import operation,
and hence the Database refresh, is successful.
Step 2:Drop the users whose Data needs to be refreshed.
Use the below script for this; it is located in /work/Rafijobs/Dropstarusers.sql
DROP USER STARTXN CASCADE;
DROP USER STARREP CASCADE;
DROP USER STARREPAPP CASCADE;
DROP USER STARMIG CASCADE;
DROP USER STARTXNAPP CASCADE;
Run the script Dropstarusers.sql at SQL prompt
SQL>@Dropstarusers.sql
Solution:
A. How to resize and/or add redo logs.
1. Review information on existing redo logs.
sql> SELECT a.group#, b.member, a.status, a.bytes
     FROM v$log a, v$logfile b
     WHERE a.group#=b.group#;
2. Add new groups
sql> ALTER DATABASE ADD LOGFILE group 4 ('/log01A.dbf', '/log01B.dbf ') SIZE 512M;
sql> ALTER DATABASE ADD LOGFILE group 5 ('/log02A.dbf', '/log02B.dbf ') SIZE 512M;
sql> ALTER DATABASE ADD LOGFILE group 6 ('/log03A.dbf', '/log03B.dbf ') SIZE 512M;
b. You can drop an online redo log group only if it is INACTIVE. If you need to drop the current
group, first force a log switch to occur.
By using this command :
SQL> ALTER SYSTEM SWITCH LOGFILE;
c. Make sure an online redo log group is archived (if archiving is enabled) before dropping it.
This can be determined by:
SQL> SELECT GROUP#, ARCHIVED, STATUS FROM V$LOG;

GROUP# ARC STATUS
------ --- ----------
     1 YES ACTIVE
     2 NO  CURRENT
     3 YES INACTIVE
     4 YES UNUSED
     5 YES UNUSED
     6 YES UNUSED

d. Check that the group is inactive and archived before dropping it.
e. After dropping an online redo log group, make sure that the drop completed successfully,
and then use the appropriate operating system command to delete the dropped online redo
log files
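Putting steps b-e together, one way to stage the drop is to write the statements to a small script first and run it only after confirming the group is INACTIVE and archived. Group 1 here is just an example:

```shell
# write the drop sequence for one redo log group to a script
# (group 1 is an example -- pick an INACTIVE, archived group)
cat > /tmp/drop_redo_group1.sql <<'EOF'
-- force a switch so the group is no longer CURRENT, then checkpoint
ALTER SYSTEM SWITCH LOGFILE;
ALTER SYSTEM CHECKPOINT;
-- verify first: SELECT GROUP#, ARCHIVED, STATUS FROM V$LOG;
ALTER DATABASE DROP LOGFILE GROUP 1;
EOF
grep -c 'ALTER' /tmp/drop_redo_group1.sql   # prints 3
```

Staging the statements in a file gives a reviewable record of the change and keeps the drop from being typed ad hoc at the prompt.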
OS File Permission
The chmod command is used to alter file permissions after the file has been created.
chmod -R 777 abc
-R recursively changes the permission: all the files in the directory get the same permission.
The three octal digits apply, in order, to the Owner, the Group and Others. Each digit is the sum of
read (4), write (2) and execute (1):

Digit  Symbolic  Permission
-----  --------  ----------------------
7      rwx       read + write + execute
6      rw-       read + write
5      r-x       read + execute
4      r--       read only
2      -w-       write only
1      --x       execute only
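A quick demonstration of the mapping, done on a scratch file so the modes stay harmless:

```shell
# create a scratch file and walk through a couple of octal modes
f=$(mktemp)
chmod 750 "$f"          # owner rwx, group r-x, others ---
stat -c '%a' "$f"       # prints 750
chmod 644 "$f"          # owner rw-, group r--, others r--
stat -c '%a' "$f"       # prints 644
rm -f "$f"
```

(stat -c is the GNU/Linux form; on other Unixes the flag differs.) In practice, prefer the tightest mode that works, such as 750 or 640, over the blanket 777 shown above.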