
Upgrade to Oracle Database 18c

Live and Uncensored!

Mike Dietrich – Master Product Manager, Database Upgrades & Migrations, Oracle Corporation
Kamran Agayev – Oracle DBA Team Leader, OCM – Oracle ACE Director, Azercell
Safe Harbor Statement
The following is intended to outline our general product direction. It is intended for
information purposes only, and may not be incorporated into any contract. It is not a
commitment to deliver any material, code, or functionality, and should not be relied upon
in making purchasing decisions. The development, release, and timing of any features or
functionality described for Oracle’s products remains at the sole discretion of Oracle.

Mike Dietrich
Master Product Manager, Oracle Database Upgrades and Migrations
Oracle Corporation, Germany

Blog: https://MikeDietrichDE.com
Twitter: MikeDietrichDE
Slides download and other resources
• https://MikeDietrichDE.com/slides

Hands-On Lab 18c
• https://MikeDietrichDE.com/hands-on-lab/

Upgrade to Oracle Database 18c - Live and Uncensored

1 Upgrade
2 Exadata Migration – Azercell Experience
3 Wrap Up

Lifetime Support Commitments and Plans

[Timeline chart, 2009–2027; bars show Premier Support, Waived Extended Support, and Paid Extended Support per release:]
• Oracle 11.2 – Extended Support
• Oracle 12.1 – Extended Support
• 12.2.0.1
• Oracle 18
• Oracle 19 – Extended Support
Clarification: Support for Annual Releases
• Annual releases get a minimum of 2 years of patching after the succeeding release is available on all enterprise (non-Engineered Systems) platforms
• Similar to what patch sets received under the previous release model

[Timeline chart, 2016–2027:]
• 12.2.0.1 – ≥ 2 years
• Oracle 18 – ≥ 2 years
• Oracle 19 – Extended Support

12 → 18 → 19 → 20
Change is here already
No first/second releases anymore

Direct Upgrade/Downgrade to/from Oracle Database 18c

• Direct upgrade (and downgrade) is supported between 18c and 11.2.0.4, 12.1.0.2 and 12.2.0.1, and onwards to 19c
• 11.2.0.3 and 12.1.0.1 require an intermediate upgrade to one of the supported releases first

New Preupgrade Tool

• preupgrade.jar (a minimal run is sketched below)

  java -jar preupgrade.jar TEXT TERMINAL

• Checks the source environment
• Detailed recommendations
• Fixup scripts
• Rerunnable and dynamic
• Always download the latest version from MOS Note 884522.1
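A minimal run, assuming the new 18c home is already installed while the environment still points to the source database (home path and SID are illustrative):

  java -jar $ORACLE_HOME_18c/rdbms/admin/preupgrade.jar TEXT TERMINAL

The tool writes its report and fixup scripts below $ORACLE_BASE/cfgtoollogs/<SID>/preupgrade; run the generated preupgrade_fixups.sql in the source home before the upgrade, and postupgrade_fixups.sql afterwards.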

catctl.pl – Parallel Upgrade

• catctl.pl

  perl catctl.pl -l /logs catupgrd.sql

• Non-CDBs and CDB$ROOT:
  – 4 workers (default) – maximum: 8 workers

• PDBs:
  – Limited only by computing power

• Resumable:

  perl catctl.pl -R -l /logs catupgrd.sql

[Sample upgrade log output:]
Parallel Phase #:19 [UPGR] Files:33 Time: 72s
Restart Phase #:20 [UPGR] Files:1 Time: 0s
Serial Phase #:21 [UPGR] Files:3 Time: 19s
Restart Phase #:22 [UPGR] Files:1 Time: 0s
Parallel Phase #:23 [UPGR] Files:24 Time: 171s
[..]
*************** Catproc CDB Views **************
***************** Catproc PLBs *****************
*************** Catproc DataPump ***************
****************** Catproc SQL *****************
************* Final Catproc scripts ************
************** Final RDBMS scripts *************
************ Upgrade Component Start ***********
**************** Upgrading Java ****************
***************** Upgrading XDK ****************
********* Upgrading APS,OLS,DV,CONTEXT *********
***************** Upgrading XDB ****************
********* Upgrading CATJAVA,OWM,MGW,RAC ********
**************** Upgrading ORDIM ***************
***************** Upgrading SDO ****************
Wait a bit …

dbupgrade – Parallel Upgrade

• dbupgrade -l /tmp/logs (a typical invocation is sketched below)

• Resumable:

  dbupgrade -R -l /tmp/logs

[Same phase-by-phase upgrade log output as with catctl.pl on the previous slide]
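dbupgrade is a shell wrapper around catctl.pl and accepts the same options. A minimal sketch, assuming the environment already points at the new 18c home (paths and SID are illustrative):

  export ORACLE_HOME=/u01/app/oracle/product/18.0.0/dbhome_1
  export ORACLE_SID=DB18
  $ORACLE_HOME/bin/dbupgrade -n 4 -l /tmp/logs    # -n: number of parallel worker processes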
“The upgrades to Oracle Database 18c at
SimCorp ran very smoothly. Performance and
stability of the upgrade program were very good.
Our upgrade project included close to 100
databases that had to be upgraded within a two-
month period. At the end of the project, all
databases in scope had been upgraded and no
major issues occurred.”
DANIEL OVERBY HANSEN
Lead Developer
TECH Development Omega DK
SimCorp A/S

Upgrade to Oracle Database 18c
• Time zone adjustment – why?

Upgrade to Oracle Database 18c
• Time zone adjustment – why?
  – Adjust the time zone after the upgrade

  Oracle Database Release     Default DST Version
  11.2.0.2 – 11.2.0.4         DST V14
  12.1.0.1, 12.1.0.2          DST V18
  12.2.0.1                    DST V26
  18c                         DST V31

  – Newest time zone patch: MOS Note 412160.1
  – New scripts in ?/rdbms/admin (usage sketched below):
    • utltz_countstar.sql
    • utltz_upg_check.sql
    • utltz_upg_apply.sql
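A minimal check-and-apply flow with these scripts, run from SQL*Plus in the upgraded home (a sketch; the apply script restarts the database, so plan for a short outage):

  SQL> @?/rdbms/admin/utltz_upg_check.sql    -- checks prerequisites and estimates the adjustment time
  SQL> @?/rdbms/admin/utltz_upg_apply.sql    -- applies the newer DST version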

Upgrade to Oracle Database 18c - Live and Uncensored

1 Upgrade
2 Exadata Migration – Azercell Experience
3 Wrap Up


Exadata Migration – Azercell Experience

Kamran Aghayev A.
Oracle Certified Master, ACE Director
About me

• Database Team Leader at AzerCell Telecom


• Oracle Certified Master, ACE Director
• Author of “RMAN Backup and Recovery”
• Author of “OCM 11g Study Guide”
• Blogger at http://www.kamranagayev.com
• President of Azerbaijan Oracle User Group (AzerOUG)
AzerCell

Azercell was founded in January 1996 and started its commercial activity as the
first GSM operator in Azerbaijan.

With the largest market share and more than 4.5 million customers, Azercell is
the leading mobile operator in Azerbaijan.

Azercell today covers 99.8% of the country’s population and 80% of the territory
of Azerbaijan (excluding the occupied territories, about 20% of the country).
The Requisite Room Survey

How many of you have migrated a mission-critical 24x7 database to Exadata?

What was the size of the migrated databases: a few GB, or TB?

How much downtime did you have? A few hours? Minutes? Seconds, or no downtime?
Installing and configuring the Exadata machines

[Diagram: Production Exadata (RAC) running PROD on storage cells · ZFS backup storage · Standby + Test Exadata running STANDBY PROD and TEST (DEV) on storage cells]


Using RAT (Real Application Testing) to capture the load on the production environment and replay it on Exadata

Real Application Testing (RAT) is used to capture the production workload and replay it on the new environment.

Use it when you upgrade a database, apply a new patch, change a critical parameter, convert a database to RAC, or make OS, storage, network, or other hardware changes.

Database Replay consists of four main steps:

Workload capture -> Workload processing -> Workload replay -> Analysis and Reporting
Using RAT (Real Application Testing) to capture the load on the production environment and replay it on Exadata

[Diagram: PROD – Capture (to ZFS) → TEST – Process -> Replay -> Analysis]
Using RAT (Real Application Testing) to capture the load on the production environment and replay it on Exadata

Configuring the capture process

- Create a directory object (shared on ZFS)

- Create a capture filter; query DBA_WORKLOAD_FILTERS to list existing filters:

exec dbms_workload_capture.add_filter(
  'sample_cap_filter',
  '<INSTANCE_NUMBER/USER/MODULE/ACTION/PROGRAM/SERVICE>',
  '[VALUE]');

- Start the capture; check its status in the DBA_WORKLOAD_CAPTURES view:

exec dbms_workload_capture.start_capture(name=>'myrat3001', dir=>'MYRAT1', duration=>3600, default_action=>'INCLUDE');

- Finish the capture:

exec dbms_workload_capture.finish_capture;

- Export the AWR snapshots taken automatically at the start and end of the capture, so you can use them later to analyze the system:

declare
  capture_id number;
begin
  select max(id) into capture_id from dba_workload_captures where status = 'COMPLETED';
  dbms_workload_capture.export_awr(capture_id);
end;
/
Using RAT (Real Application Testing) to capture the load on the production environment and replay it on Exadata

Configuring the replay process

- Process the capture: creates the metadata needed for the replay:

exec dbms_workload_replay.process_capture('CAPTURE_DIR');

- Initialize the replay:

exec dbms_workload_replay.initialize_replay(replay_name =>'myrat3001', replay_dir=>'CAPTURE_DIR');

- Remap the connections (if required):

exec dbms_workload_replay.remap_connection(<connection_id>, '<replay_connection>');

exec dbms_workload_replay.remap_connection(1,
  '(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=sample_host)(PORT=1234))(CONNECT_DATA=(SERVICE_NAME=sample_sid)))');

- Prepare the replay: puts the database state into REPLAY mode:

exec dbms_workload_replay.prepare_replay(synchronization => TRUE, connect_time_scale => 100, think_time_scale => 100,
  think_time_auto_correct => FALSE);


Using RAT (Real Application Testing) to capture the load on the production environment and replay it on Exadata

- Calibrate first: wrc in calibrate mode estimates how many replay clients are needed to parse the workload and send it to the server (the replay-mode invocation is shown below):

$ORACLE_HOME/bin/wrc <username>/<password> mode=calibrate replaydir=/zfs_testbackup1/VSME/rattests
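The clients themselves are then started in replay mode against the same replay directory (credentials illustrative):

$ORACLE_HOME/bin/wrc <username>/<password> mode=replay replaydir=/zfs_testbackup1/VSME/rattests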

- Start and cancel the replay:

exec dbms_workload_replay.start_replay;

exec dbms_workload_replay.cancel_replay;

- Import the AWR data of the replay if you want to report on a different database:

select dbms_workload_replay.import_awr(<replay_id>, 'SYSTEM') from dual;

- Create an HTML report comparing the capture and replay AWR snapshots:

spool diff.report.html
select * from table(dbms_workload_repository.awr_diff_report_html(588260762,1,28685,28686,4284103751,1,28665,28666));
spool off
Migrating mission-critical 24x7 databases to Exadata
Define the best migration plan and migrate the database to Exadata (for testing purposes)

If there is a version change, make sure to read the relevant upgrade notes on My Oracle Support (MOS):

• Master Note For Oracle Database Upgrades and Migrations (Doc ID 1152016.1)

• Complete Checklist for Manual Upgrades to Non-CDB Oracle Database 12c Release 2 (12.2) (Doc ID 2173141.1)

• How to Upgrade to Oracle Database 12c Release 1 (12.1.0) and Known Issues (Doc ID 2085705.1)

• Oracle Database Upgrade Known issues - 12.2 (Doc ID 2243613.1)

• How to Upgrade to/Downgrade from Grid Infrastructure 12.1 and Known Issues (Doc ID 1579762.1)

• How to Upgrade to/Downgrade from Grid Infrastructure 12.2 and Known Issues (Doc ID 2240959.1)

• Patches to apply before upgrading Oracle GI and DB to 12.2.0.1 (Doc ID 2180188.1)

• Top 11gR2 and 12c Grid Infrastructure Upgrade Issues (Doc ID 1366558.1)
Define the best migration plan and migrate the database to Exadata (for testing purposes)

Actions to be performed before the upgrade (a few of these checks are shown as SQL after this list):

• Collect statistics:
  exec dbms_stats.gather_dictionary_stats;
• Disable all batch and cron jobs; make sure to move all cron jobs to Exadata
• Purge the recycle bin
• Move the audit table to a different tablespace; back it up and restore it after the migration
• Make sure there are no outstanding distributed transactions before the upgrade (DBA_2PC_PENDING)
• Check all non-default parameters in the production environment
• Make sure to create the database with the same character set as the production environment
• If an upgrade is also involved, create more and larger redo log groups; remove the unnecessary groups after the upgrade
• If the database will be upgraded, check which new features to use and which unsupported features to avoid
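A few of these checks as SQL*Plus commands (illustrative; run them on the source database):

purge dba_recyclebin;
select local_tran_id, state from dba_2pc_pending;              -- must return no rows
select name, value from v$parameter where isdefault = 'FALSE'; -- non-default parameters to carry over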
Configuring High Availability on the Exadata machine

What is downtime?

Use uptime.is for a more detailed breakdown in terms of hours, minutes, etc.

Availability    Yearly downtime
99%             3d 15h 39m 29.5s
99.9%           8h 45m 57.0s
99.99%          52m 35.7s
99.999%         5m 15.6s
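These yearly figures follow directly from the availability fraction, using an average year of 365.2425 days: downtime per year = (1 - availability) x 365.2425 x 86,400 seconds. For 99.9%, that is 0.001 x 31,556,952 s ≈ 31,557 s ≈ 8h 45m 57s.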
Define the best migration plan and migrate database to Exadata (for the testing purposes)

• Using Data Pump (Schema, Full, Tablespace)

• Transportable Tablespace migration

• Transportable tablespace with incremental backup

• Duplicate database

• Create a standby database and perform failover

• Zero-downtime migration with GoldenGate

• Single instance to RAC


Define the best migration plan and migrate the database to Exadata (for testing purposes)

If the database size is small, you can take a downtime, or you only need to move specific schemas: use Data Pump (schema, full, or tablespace mode)

- Schema-level export/import
- Full export/import
- 12c new feature: Full Transportable Export/Import:

expdp system/oracle transportable=always full=y

  (expdp lists the datafiles required, e.g.:)
  Datafiles required for transportable tablespace APP_DATA:
  /u01/app/oracle/oradata/prod/app_data01.dbf

impdp system/oracle transport_datafiles=
  '/u01/app/oracle/oradata/prodnew/datafiles/app_data01.dbf'
Define the best migration plan and migrate the database to Exadata (for testing purposes)

If the database size is big but you can take a downtime, or you want to move only specific tablespaces: use Transportable Tablespace migration

- Check that the tablespace is self-contained:

exec dbms_tts.transport_set_check('APPDATA',TRUE);

- Put the tablespace in READ ONLY mode:

ALTER TABLESPACE appdata READ ONLY;

- Generate a metadata export of the transportable tablespace:

expdp system/oracle TRANSPORT_TABLESPACES=APPDATA

- Convert the datafiles if the platforms differ:

CONVERT DATAFILE TO PLATFORM="Target_Platform" FROM PLATFORM="Source_Platform"

- Copy the datafiles and dump file to the target and import:

impdp dumpfile=tbs_appdata_meta.dmp TRANSPORT_DATAFILES='/data/proddb/appdata01.dbf'
Define the best migration plan and migrate the database to Exadata (for testing purposes)

If the database size is big, you cannot take a downtime, and you are able to move objects at the tablespace level: use Transportable Tablespaces with incremental backup. A full backup of the individual tablespaces is taken, and incremental backups are then applied to the restored datafiles, reducing the migration downtime from days/hours to minutes.


Define the best migration plan and migrate the database to Exadata (for testing purposes)

[Diagram: HP-UX IA (64-bit) source (SYSTEM, SYSAUX, UNDO, TEMP, TBS_MIGRATION) → Oracle Exadata (/home/oracle/tbs_migration01.dbf)]

- Take a full incremental level 0 backup, transfer it to the Exadata machine and restore it using RMAN
- Take an incremental level 1 backup, transfer it to the Exadata machine and recover using RMAN
Define the best migration plan and migrate the database to Exadata (for testing purposes)

[Diagram: same HP-UX IA (64-bit) source → Oracle Exadata]

- Put the tablespace into READ ONLY mode
- Take the last incremental level 1 backup
- Export the metadata of the tablespace using Data Pump
- Transfer the backup and dump file to the Exadata machine, recover the backup and import the dump file

11G – Reduce Transportable Tablespace Downtime using Cross Platform Incremental Backup (Doc ID 1389592.1)
12C – Reduce Transportable Tablespace Downtime using Cross Platform Incremental Backup (Doc ID 2005729.1)
Define the best migration plan and migrate the database to Exadata (for testing purposes)

If the database size is big, you cannot take a downtime, and you are able to move objects at the tablespace level:

BACKUP INCREMENTAL LEVEL 0 FORMAT '/home/oracle/backup/TBS_INC0_%U.bkp' TABLESPACE tbs_migrate;

BACKUP INCREMENTAL LEVEL 1 FORMAT '/home/oracle/backup/TBS_INC1_%U.bkp' TABLESPACE tbs_migrate;

RESTORE FROM PLATFORM 'HP-UX IA (64-bit)' FOREIGN DATAFILE 5 FORMAT '/home/oracle/backup/datafile05.dbf' FROM BACKUPSET '/home/oracle/backup/TBS_INC0_0jsj4i97_1_1.bkp';

RECOVER FROM PLATFORM 'HP-UX IA (64-bit)' FOREIGN DATAFILECOPY '/home/oracle/backup/datafile05.dbf' FROM BACKUPSET '/home/oracle/backup/TBS_INC1_0lsj4ias_1_1.bkp';

impdp \"/ as sysdba\" DIRECTORY=ORACLE_BASE DUMPFILE=exp.dmp TRANSPORT_DATAFILES='/home/oracle/datafile05.dbf'
Define the best migration plan and migrate the database to Exadata (for testing purposes)

The process can be automated using the rman_xttconvert tool; download it from MOS: Doc ID 2005729.1

Phase 1 – Initial Setup
Phase 2 – Prepare
Phase 3 – Roll Forward
Phase 4 – Final Incremental Backup
Phase 5 – Transport: Import Metadata
Phase 6 – Validate the Transported Data
Phase 7 – Cleanup
Define the best migration plan and migrate the database to Exadata (for testing purposes)

If the OS platform is Linux, you can duplicate the database:

- Take a full backup
- Move the backup to the Exadata machine
- Start the instance in NOMOUNT mode and restore the backup
- Take an incremental backup, move it to the Exadata machine and apply it
- Stop the listener, apply the last archived log file and open the database

OR

- Configure the network and duplicate the database directly:

[oracle@proddb ~]$ rman target sys/pass@PROD auxiliary /

RMAN> DUPLICATE TARGET DATABASE TO newprod;


Define the best migration plan and migrate the database to Exadata (for testing purposes)

If the OS platform is Linux, you can create a standby database and perform a switchover/failover.

The steps are the same as in the database duplication method; the database is placed in managed recovery mode and then the switchover is performed:

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT;

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE FINISH;

SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY WITH SESSION SHUTDOWN;

SQL> ALTER DATABASE OPEN;


Define the best migration plan and migrate the database to Exadata (for testing purposes)

If minimal (near-zero) downtime is required and the source and target platforms are different, the best option is GoldenGate replication:

- Install GoldenGate on both the source and target hosts
- Configure an Extract on the source database:

EXTRACT MAIN_EXTRACT
USERID ggadmin, PASSWORD manager
EXTTRAIL /zfs1/PRODDB/goldenarch/a
TABLE master.*;

add extract MAIN_EXTRACT

- Register the Extract and note the SCN for the initial load:

register extract MAIN_EXTRACT database

- Perform the initial load
- Configure a PUMP to transfer the trail files to Exadata
- Import the initial load into Exadata
- Configure a REPLICAT and start it (see the sketch below)
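A minimal REPLICAT parameter sketch for the target side (names and mappings are illustrative; the AFTERCSN value is the SCN captured at the REGISTER EXTRACT step):

REPLICAT MAIN_REP
USERID ggadmin, PASSWORD manager
ASSUMETARGETDEFS
MAP master.*, TARGET master.*;

add replicat MAIN_REP, exttrail /zfs1/PRODDB/goldenarch/a
start replicat MAIN_REP, aftercsn <scn_from_register_extract>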
Define the best migration plan and migrate the database to Exadata (for testing purposes)

Converting a single instance to RAC requires additional steps (see the srvctl sketch below):

- First, take a backup of the production environment and restore it as a single node on the Exadata machine
- Recover the database on the Exadata
- Modify the initialization parameter file and add the cluster-related parameters
- Use the srvctl add database command to register the database in the cluster
- Use the srvctl add instance command to register the instances in the cluster
- Shut down the single instance and start the RAC database
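A sketch of the registration commands (database name, home path and node names are placeholders):

srvctl add database -db PROD -oraclehome /u01/app/oracle/product/12.2.0.1/dbhome_1
srvctl add instance -db PROD -instance PROD1 -node exa01
srvctl add instance -db PROD -instance PROD2 -node exa02
srvctl start database -db PROD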
Exadata Migration – Azercell Experience

During the first phase, we migrated 7 mission-critical databases from AIX, HP-UX and Linux to Exadata

DB size – 500 GB to 30 TB

Downtime – 2 minutes to 1 hour


Migrating mission-critical 24x7 databases to Exadata

Production servers – 17 · Test and Standby servers – 10

Migrated 7 databases from 27 servers to 2 Exadata machines


Migrating/Upgrading a 10.2.0.4 (Linux) RAC database to 12.2 using RMAN

4-node 10.2.0.4 RAC
OS – Linux
DB size – 800 GB
Downtime – 25 min
Migrating/Upgrading a 10.2.0.4 (Linux) RAC database to 12.2 using RMAN

[Diagram: Production db – Backup → ZFS backup storage – Restore/Recover → Exadata machine]

- Mount the ZFS storage on the old server and take an RMAN backup to the shared ZFS storage (1 hour – no downtime)
- Restore the backup with RMAN into the 12.2 home as a single node on Exadata; apply the archived log files and recover the db (20 min – no downtime)
- Stop the listener on the old production db, switch the log file, move and apply the last archived log file to the new database, run ALTER DATABASE OPEN RESETLOGS UPGRADE and upgrade the db to 12.2 (20 min – downtime)
- Add the second instance and convert the database to RAC (5 min – downtime)
Migrating a 12.2 (Linux) RAC database to Exadata using standby switchover

4-node 12.2 RAC
OS – Linux
DB size – 900 GB
Downtime – 8 minutes
Migrating a 12.2 (Linux) RAC database to Exadata using standby switchover

[Diagram: Old Production / Primary db – Switchover → Exadata / Standby db]

- Create a standby database on Exadata using the RMAN DUPLICATE ... FROM ACTIVE DATABASE command (1 hour – no downtime)
- Apply the last archived log file, stop the listener on the production database and perform the switchover to Exadata (3 min – downtime)
- Add the second instance and convert the database to a RAC db (5 min – downtime)
Migrating a 12.2 (Linux) RAC database to Exadata using standby switchover

[Diagram: after the switchover, the roles reverse – Old Production / Standby db, Exadata / Primary db; if needed, switch back – Old Production / Primary db, Exadata / Standby db]

(Steps as on the previous slide.)
Migrating an 11.2.0.4 (HP-UX) RAC database to Exadata using the XTTS method

3-node 11.2.0.4 RAC
OS – HP-UX
DB size – 3 TB
Downtime – 15 minutes
Migrating an 11.2.0.4 (HP-UX) RAC database to Exadata using the XTTS method

[Diagram: Old Production (TBS_PAYMENT, TBS_APPLICATION, TBS_MAINTENANCE, TBS_ARCHIVE, TBS_PRODUCT, TBS_SERVICES, TBS_MICROSERVICES) – RMAN full tablespace backup, incremental tablespace backups, and last incremental + metadata backup via ZFS backup storage – copy/convert/restore → Exadata]

- Take a LEVEL 0 incremental backup of all tablespaces to ZFS, copy the datafiles to Exadata (ASM), convert from HP-UX to Linux, restore the backup and plug all tablespaces into Exadata; tests were performed on the test database (14 hours – no downtime)
- Take a LEVEL 1 incremental backup of all tablespaces to ZFS, copy the datafiles to Exadata (ASM), convert from HP-UX to Linux and recover the backup (2 hours – no downtime); repeat this step until the last incremental backup apply takes less than 1 minute
- Put the tablespaces in READ ONLY mode, take the final LEVEL 1 incremental backup of all tablespaces along with the metadata export to ZFS, copy the datafiles to Exadata (ASM), convert from HP-UX to Linux, recover the backup and import the metadata (15 minutes – downtime)
Migrating an 11.2.0.4 (HP-UX) database to Exadata using GoldenGate replication

Single node 11.2.0.4
OS – HP-UX
DB size – 30 TB
Downtime – 5 minutes
Migrating an 11.2.0.4 (HP-UX) database to Exadata using GoldenGate replication

[Diagram: Old Production – Extract → Trail → Pump → Network → Trail → Replicat → Exadata]

The initial load is performed using Data Pump or RMAN.

Source: install GoldenGate and configure EXTRACT and PUMP, providing the list of schemas and objects.

Perform the initial load from the source database to the target as of the SCN provided during the REGISTER EXTRACT step; the initial load can be done using Data Pump or RMAN.

Target: install GoldenGate, configure REPLICAT and start it.


Post Migration Checks

• During the test migration, get the count of objects per schema and compare it with the production database (see the SQL sketch below)

• Add the new database to Cloud Control 13c and check the performance

• Get an AWR report and investigate the performance of the database and SQL commands; compare it with previous AWR reports of the old production database

• Make sure to gather full database statistics

• Create a standby database and configure the database backup right after the migration
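A sketch of the object-count comparison and the statistics gathering (run the query on both source and target and diff the results):

select owner, object_type, count(*)
from dba_objects
group by owner, object_type
order by owner, object_type;

exec dbms_stats.gather_database_stats(degree => dbms_stats.auto_degree);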
Post Migration Checks

To configure corruption protection on both the primary and standby databases, use the following parameters (see the ALTER SYSTEM sketch below):

DB_BLOCK_CHECKSUM=FULL (checks for physical corruption)

If enabled, DBWR calculates a checksum and stores it in the block header. The best practice is to set DB_BLOCK_CHECKSUM=FULL on both the primary and standby databases, which typically incurs 4% to 5% overhead on the system; overhead for a setting of TYPICAL ranges from 1% to 2% for an OLTP workload. If setting FULL results in unacceptable performance degradation, set it to TYPICAL on the primary database and FULL on the standby database.

DB_BLOCK_CHECKING=FULL or MEDIUM (checks for logical corruption)

This parameter specifies whether Oracle performs logical intra-block checking for database blocks. Block checking verifies block contents, including header and user data, when changes are made to the block, and prevents in-memory corruptions from being written to disk.

DB_LOST_WRITE_PROTECT=TYPICAL

Enables or disables lost write detection. A data block lost write occurs when the I/O subsystem acknowledges the completion of a block write while the write did not in fact reach persistent storage.
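As a sketch, all three parameters are dynamic and can be set with ALTER SYSTEM (values per the recommendations above):

ALTER SYSTEM SET db_block_checksum = FULL SCOPE=BOTH;
ALTER SYSTEM SET db_block_checking = MEDIUM SCOPE=BOTH;
ALTER SYSTEM SET db_lost_write_protect = TYPICAL SCOPE=BOTH;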
Challenges during the migration

Trail data got corrupted and was not applied to the target database by the GoldenGate REPLICAT process because of incorrect ZFS mount options:

2018-09-05T08:17:22.815+0400 ERROR OGG-02171 Oracle GoldenGate Capture for Oracle, pump1.prm: Error reading LCR from data source. Status 509, data source type 0.

Replicat abends with OGG-01028 Incompatible record (104) getting header and trail is not corrupted (Doc ID 2076053.1)
Challenges during the migration

Solution:

- Remount the ZFS share using the correct mount options
- Use local storage to store the trail files
Challenges during the migration

Because of a bug, GoldenGate did not replicate changes on some objects from the source to the target database:

Oracle GoldenGate: Compressed Tables Are Not Supported until OGG v11.2.X Integrated Extract (Doc ID 1266389.1)

Classic capture was used instead of integrated capture because the COMPATIBLE parameter was set to 10.2.0.4 on the 11.2.0.4 database, so changes on compressed tables were not propagated to the target database.

Lesson learned:
- Make sure that all objects are in sync in both databases at the row level
- Use Oracle GoldenGate Veridata 12c to compare and repair data on the source and target databases
Challenges during the migration

TimesTen stopped working after the successful migration of the 30 TB database to Exadata because of a database character set mismatch:

Error: [TimesTen][TimesTen 11.2.2.4.1 ODBC Driver]Invalid value (WE8ISO8859P15) for DatabaseCharacterSet connection attribute -- value must be the same as the current data store value (WE8ISO8859P1)

Source: WE8ISO8859P15
Target: WE8ISO8859P1
Challenges during the migration

- Use csscan to convert the character set from WE8ISO8859P15 to WE8ISO8859P1 (takes a long time; see the sketch below)
- Rebuild the database from scratch
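A sketch of the scan step (csscan only reports convertibility; the actual conversion is applied with csalter.plb after a clean scan; credentials are illustrative):

csscan \"sys/password as sysdba\" FULL=Y TOCHAR=WE8ISO8859P1 LOG=p15top1 CAPTURE=Y PROCESS=4
SQL> @?/rdbms/admin/csalter.plb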
Questions?

Thanks for coming!!

http://www.kamranagayev.com
http://www.ocmguide.com
http://www.oraclevideotutorials.com
Upgrade to Oracle Database 18c - Live and Uncensored

1 Upgrade
2 Exadata Migration – Azercell Experience
3 Wrap Up

Our next OOW talks

Slides download and other resources
• https://MikeDietrichDE.com/slides

