
Much overlooked: UAT and DEV DB standards

A much overlooked point while building UAT and DEV databases is that they do not model the
Production database. I've seen that people tend to ignore this unless they are pushed hard to
keep the UAT and DEV databases as close as possible to Production in database design, data
distribution, and hardware/software environment.

For example: there was a Java application running on Tomcat/Apache on Solaris 10 on 64-bit
SPARC machines.

1. The UAT database was refreshed from an export dump instead of from a physical hot/cold/RMAN
backup of Prod. Extent sizes on UAT and Prod were different. Prod had heavy fragmentation in
some tables and indexes, while UAT did not, as it was refreshed from an export dump.

2. Statistics were gathered in a different way than in Production.

3. All database files were placed on a single disk array in UAT. Production had three
mirrored copies of the redo logs, while UAT had no mirrored redo logs.

4. The application was using connection pooling implemented through Java code developed in
the application itself (instead of using Oracle's default connection pooling or the connection
pooling of WebLogic etc.).

5. The UAT middle tier was using a different JDBC driver than Production.

6. The application kept hitting the open_cursors limit in UAT because connections from the
pool were not closed and some result sets were still open. Increasing open_cursors in UAT
worked around it, but Production did not show this issue, as the Production servers had more
connections in the pool (a diagnostic sketch follows below).
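
A minimal diagnostic sketch (not from the original incident) that lists how many cursors each
session currently holds open, so such a leak can be spotted before the limit is hit:

-- Cursors currently open per session; sessions creeping toward
-- the open_cursors limit are candidates for a cursor leak.
SELECT s.sid, s.username, st.value AS open_cursors
FROM   v$sesstat  st,
       v$statname sn,
       v$session  s
WHERE  sn.statistic# = st.statistic#
AND    s.sid         = st.sid
AND    sn.name       = 'opened cursors current'
ORDER  BY st.value DESC;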

Interview questions on Oracle 10g RAC


Interview questions on Oracle 10g RAC to exercise your mind, for good measure.

1. What is node eviction? (good to start with a simple question)
2. What is split brain?
3. Who do your clients connect to, the VIP or the public IP? Or is it your choice?
4. How can you change the VIP?
5. Can the private IP be changed?
6. What does root.sh do when you install 10g RAC?
7. How is the virtual IP configured? What is done behind the VIP Configuration Assistant?
--some simple questions
8. What are client-side balancing and server-side balancing?
9. How does the listener handle requests in RAC?
10. Have you ever set up TAF? If yes, explain how failover happens.
11. How can cache fusion improve or degrade performance?
12. Have you ever faced any performance issue due to RAC?
13. What is the background process for cache fusion? Does it have anything to do with the log
writer process?
14. Will you increase parallelism if you have RAC, to gain inter-instance parallelism? What
are the considerations to decide?
15. What is the single point of failure in RAC?
16. How do you back up the voting disk, and how do you recover it?
17. What information is stored in the OCR? What if you lose it? How can you recover it?
18. How many voting disks and OCRs can you have? Why can voting disks only be in odd
numbers?
19. A query running fast on one node is very slow on another node. All the nodes have the
same configuration. What could be the reasons?
20. Does RMAN behave differently in RAC?
21. Can archive logs be placed on an ASM disk? What about on raw devices?
22. Have you ever used OCFS? Can you place the OCR and voting disks on OCFS?

Don't assume!

We cannot assume, in the same way that we don't believe in rules of thumb!

Consider the query:

select c1, sum(c2)
from t1
group by c1

This query returned the result set sorted by c1 in older releases, but that changes in 10gR2, which uses a HASH GROUP BY
operation to implement grouping rather than the SORT GROUP BY it would use in earlier versions. So
here, if sorting is desired, the query must have an explicit ORDER BY, as shown below.
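
A minimal sketch of the fix, using the same query: the ORDER BY makes the sort a stated
requirement rather than a side effect of the execution plan.

-- Sorted output is now guaranteed whether the optimizer picks
-- HASH GROUP BY or SORT GROUP BY.
select c1, sum(c2)
from t1
group by c1
order by c1;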

Similarly, there can be some join queries for which users may be getting a sorted result set, but they cannot
rely on it always; if the execution plan changes, the result set may no longer be sorted. So if sorting is required,
developers need to explicitly specify an ORDER BY clause in the query.

I remember a case in which a junior developer wrote a query to dump table data to an ASCII CSV file. It
was obviously required that the column data in the CSV be in the same order as in the table. When I told the
developer about the view user_tab_columns, he used a query on this view to estimate the maximum record length of
the table in the CSV file (rather than manually summing all the column widths of the table). What he could have done
as an alternative was to set a large LINESIZE along with TRIMSPOOL ON, but he wanted to cut short the work of typing select
c1||','||c2||','||c3||','||... from table. So he generated this select query from user_tab_columns. But he
assumed the columns would come back in the same order as in the table. The result was the wrong column order in the CSV file. So
please don't assume: it was a view, so the row order is not guaranteed without an ORDER BY.
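
A minimal sketch of the safe version, assuming a hypothetical table named EMP; the ORDER BY
column_id is exactly what the developer omitted:

-- Row order from a dictionary view is undefined without ORDER BY;
-- column_id returns the columns in their table-definition order.
SELECT column_name
FROM   user_tab_columns
WHERE  table_name = 'EMP'
ORDER  BY column_id;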

I/O how much you have - mind it!

Rules of thumb are never advised by me, but some can be taken as a checklist to work through, one by one, while
tuning I/O.

I/O, how much you have - mind it! So rule 1 is: minimize I/O.

Rule 2: maximize cached I/O.

Rule 3: minimize I/O contention.

How to cut I/O:

1. Cut unnecessary fetching. Be restrictive about the columns in the select list. Make sure all columns fetched in
explicit/implicit cursors are used somewhere in the code.

2. Check the usefulness of indexed columns. Some indexes may be slowing DMLs heavily while not yielding any query
performance gain. Identify and drop such indexes (see the index-monitoring sketch after this list).

3. Avoid triggers which perform a lot of transactions and auditing from inside; these may actually be
slowing DMLs, especially when DMLs are issued in bulk.

4. Check that all tables/indexes have appropriate values set for PCTFREE and PCTUSED. PCTFREE defaults to
10%, so you may be not only wasting 10% extra disk/cache memory but also causing more I/O for
objects not undergoing future updates.

5. If CPU resources are available, some tables can be compressed. This will not only minimize I/O at
the expense of CPU but also meet the objective "maximize cached I/O". How? Because the table now needs fewer
buffers, you have more free buffers that other objects can use. This is very useful when
there is no shortage of CPU but there is a scarcity of memory (see the compression sketch after this list).

6. If using materialized views for replication or reporting, try to make their refresh possible by the FAST
method (see the refresh sketch after this list).

7. Optimize query execution plans.
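
For point 2, a hedged way to spot candidate indexes in 10g is index usage monitoring. A
minimal sketch, assuming a hypothetical index named T1_IDX:

-- Switch on usage monitoring for the index.
ALTER INDEX t1_idx MONITORING USAGE;

-- ...let a representative workload run, then check whether it was used
-- (query V$OBJECT_USAGE as the owner of the index).
SELECT index_name, used, monitoring
FROM   v$object_usage
WHERE  index_name = 'T1_IDX';

-- Switch monitoring off again when done.
ALTER INDEX t1_idx NOMONITORING USAGE;

Note the view records only a YES/NO flag, so it tells you an index was touched, not how much
it actually helped.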
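
For point 5, a sketch of basic table compression, assuming a hypothetical table T1. In 10g,
basic compression applies only to data loaded via direct-path operations, which is why the
rebuild uses MOVE:

-- Rebuild the table compressed; MOVE is a direct-path operation,
-- so existing rows are compressed as well.
ALTER TABLE t1 MOVE COMPRESS;

-- Indexes on T1 are left UNUSABLE by the MOVE and must be rebuilt.
ALTER INDEX t1_idx REBUILD;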
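
For point 6, a sketch of a FAST (incremental) refresh, assuming a hypothetical materialized
view MV_SALES on a master table SALES. FAST refresh needs a materialized view log on the
master table; the exact log definition required depends on the MV's query:

-- Prerequisite for FAST refresh: a materialized view log on the master.
CREATE MATERIALIZED VIEW LOG ON sales WITH ROWID;

-- Refresh incrementally ('F' = fast) instead of rebuilding ('C' = complete).
EXECUTE DBMS_MVIEW.REFRESH(list => 'MV_SALES', method => 'F');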


.
.
.
Maximize cached I/O

1. Explore whether you need to configure KEEP and RECYCLE pools in your database for frequently accessed (small
in size) and rarely accessed (bigger) tables, and then set and size them appropriately. Assign the related
objects to these pools (a sketch follows after this list).

2. Set the buffer cache size appropriately.

3. If using a bigger SGA (> 16 GB) on Linux, use huge pages memory.

4. Set PGA_AGGREGATE_TARGET appropriately.
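
For point 1, a minimal sketch of carving out a KEEP pool and assigning an object to it,
assuming a hypothetical small, hot table LOOKUP_CODES (the size is illustrative only):

-- Allocate a KEEP buffer pool.
ALTER SYSTEM SET db_keep_cache_size = 256M;

-- Keep the small, frequently accessed table's blocks in that pool.
ALTER TABLE lookup_codes STORAGE (BUFFER_POOL KEEP);

The RECYCLE pool works the same way, via db_recycle_cache_size and BUFFER_POOL RECYCLE, for
big segments whose blocks are rarely re-read.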


.
.
.
Minimize I/O contention:
Balance the I/O across multiple disk arrays if possible.
Take care of all I/O sources: redo logs, undo tablespaces, temporary tablespaces, index tablespaces,
DATA tablespaces, and archive logs too if the DB is running in archivelog mode. Spread these across disks
depending on their concurrent usage. You can check the statspack/AWR report for I/O usage
per tablespace/datafile, or query the file statistics directly, as in the sketch below.
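
A minimal sketch of checking cumulative physical I/O per datafile straight from the instance
(statspack/AWR gives the same picture per snapshot interval):

-- Physical reads/writes per datafile since instance startup;
-- heavily used files sharing one disk are contention candidates.
SELECT df.name, fs.phyrds, fs.phywrts
FROM   v$filestat fs, v$datafile df
WHERE  fs.file# = df.file#
ORDER  BY fs.phyrds + fs.phywrts DESC;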

Tracking DDL while the DB is in noarchivelog mode


You want to track DDL and your DB is in noarchivelog mode. I attempted it as below [no way but to use a catalog in a flat
file]:
SQL> conn / as sysdba
Connected.
SQL> ALTER SESSION SET NLS_DATE_FORMAT = 'DD-MON-YYYY HH24:MI:SS';
Session altered.

SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR( -
> STARTTIME => '13-apr-2009 12:42:00', -
> ENDTIME => '15-apr-2009 11:55:00', -
> OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + -
> DBMS_LOGMNR.CONTINUOUS_MINE);
BEGIN DBMS_LOGMNR.START_LOGMNR( STARTTIME => '13-apr-2009 12:42:00', ENDTIME => '15-apr-2009
11:55:00', OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + DBMS_LOGMNR.CONTINUOUS_MINE);
END;

*
ERROR at line 1:
ORA-01325: archive log mode must be enabled to build into the logstream

And again, below, you get an error when you try to use the redo logs for building the dictionary:
SQL> EXECUTE DBMS_LOGMNR_D.BUILD ( options=>DBMS_LOGMNR_D.STORE_IN_REDO_LOGS);
BEGIN DBMS_LOGMNR_D.BUILD ( options=>DBMS_LOGMNR_D.STORE_IN_REDO_LOGS); END;
*
ERROR at line 1:
ORA-01325: archive log mode must be enabled to build into the logstream

If a DB bounce is affordable, go ahead as follows:


SQL> alter system set utl_file_dir='c:\dict' scope=spfile;
SQL> shutdown immediate
SQL> startup

SQL> EXECUTE DBMS_LOGMNR_D.BUILD('dictionary.ora', 'c:\dict', -
> OPTIONS => DBMS_LOGMNR_D.STORE_IN_FLAT_FILE);
PL/SQL procedure successfully completed.

SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DDL_DICT_TRACKING);
BEGIN DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DDL_DICT_TRACKING); END;
*
ERROR at line 1:
ORA-01292: no log file has been specified for the current LogMiner session
ORA-06512: at "SYS.DBMS_LOGMNR", line 53
ORA-06512: at line 1

Seems I was in too much of a hurry!!

SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => 'C:\ORACLE\ORADATA\CORPENH\REDO01.LOG', -
> OPTIONS => DBMS_LOGMNR.NEW);

SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => 'C:\ORACLE\ORADATA\CORPENH\REDO02.LOG', -
> OPTIONS => DBMS_LOGMNR.ADDFILE);

SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => 'C:\ORACLE\ORADATA\CORPENH\REDO03.LOG', -
> OPTIONS => DBMS_LOGMNR.ADDFILE);

[file 3 was the current log group's file]

SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DDL_DICT_TRACKING);

SQL> select count(*) from v$logmnr_contents;

  COUNT(*)
----------
     48050

SQL> create table t1 as select * from v$logmnr_contents;

Table created.

SQL> create table t2(c1 number);

Table created.

SQL> truncate table t2;

Table truncated.

SQL> create table t1_log as select * from v$logmnr_contents;

Table created.

Query from another session:

SQL> SELECT t.session_info, t.sql_redo, t.*
     FROM   t1_log t
     WHERE  UPPER(t.sql_redo) LIKE UPPER('%truncate%')
     OR     t.operation = 'DDL';

This shows that the DDL operations issued above at the SQL> prompt were also tracked.
