Migrate Oracle RAC to a new datacenter


Our Oracle cluster, which consists of two nodes running Oracle RAC 11.2.0.2 on Oracle Linux 5 and serves as the production application database, needed to be migrated to a new datacenter.

For cost reduction purposes, it was decided to move to a new datacenter, which meant we had to migrate the Oracle cluster there with minimum downtime.

You must be thinking: what's the problem? There are several tools for cases like this. But we wanted to do it at minimum cost (we had already purchased 2 new hosts for the new datacenter), and, oh, Oracle Linux 5 isn't supported anymore on UCS (Cisco Unified Computing System), while the minimum Oracle version for Oracle Linux 6 is 11.2.0.4!

So this was our challenge: migrating the Oracle cluster to its new home and upgrading it at the same time.

Preparation Steps

Below is a list of the preparation steps we performed several days before the cut-over.

Install Oracle clusterware and database binaries

Install Oracle clusterware and database binaries on the new hosts (in the new datacenter)

Execute the following commands on the new hosts:

Add to /etc/sysctl.conf:
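
The exact values for shmmax/shmall depend on the amount of memory on the hosts; a typical 11gR2 set looks roughly like this:

    fs.aio-max-nr = 1048576
    fs.file-max = 6815744
    kernel.shmall = 2097152
    kernel.shmmax = 68719476736
    kernel.shmmni = 4096
    kernel.sem = 250 32000 100 128
    net.ipv4.ip_local_port_range = 9000 65500
    net.core.rmem_default = 262144
    net.core.rmem_max = 4194304
    net.core.wmem_default = 262144
    net.core.wmem_max = 1048576

Reload the values with sysctl -p.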

Restart the network service

Configure ntp:
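
The key point for RAC is that ntpd must run with the slewing option; in /etc/sysconfig/ntpd that means something like this (the servers in /etc/ntp.conf are of course your own):

    # -x (slew) is required when clusterware relies on ntpd for time sync
    OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"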

Restart ntpd

Set /etc/hosts on both nodes:
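
The host names and addresses below are made-up examples; the point is to have public, private (interconnect) and VIP entries for both nodes:

    # public
    10.10.10.11    racnode1.example.com        racnode1
    10.10.10.12    racnode2.example.com        racnode2
    # private interconnect
    192.168.10.11  racnode1-priv.example.com   racnode1-priv
    192.168.10.12  racnode2-priv.example.com   racnode2-priv
    # virtual IPs
    10.10.10.21    racnode1-vip.example.com    racnode1-vip
    10.10.10.22    racnode2-vip.example.com    racnode2-vip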

Install smartmontools.x86_64, which is required for cvuqdisk

Unzip grid zip file and install cvuqdisk
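
Roughly as follows (the zip file name and the cvuqdisk version may differ in your download):

    unzip p13390677_112040_Linux-x86-64_3of7.zip -d /u01/stage
    cd /u01/stage/grid/rpm
    export CVUQDISK_GRP=oinstall
    rpm -ivh cvuqdisk-1.0.9-1.rpm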

As root, change the owner of the Oracle home directory

Add umask to .bash_profile

Add these shell limits to /etc/security/limits.conf
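
These are the standard Oracle recommendations, something like:

    oracle   soft   nproc    2047
    oracle   hard   nproc    16384
    oracle   soft   nofile   1024
    oracle   hard   nofile   65536
    oracle   soft   stack    10240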

Download ASM rpms from http://www.oracle.com/technetwork/topics/linux/asmlib/index-101839.html

Install ASM rpms:
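
The versions below are examples of the OL6 packages; take whatever matches your release:

    rpm -ivh oracleasm-support-2.1.8-1.el6.x86_64.rpm
    rpm -ivh oracleasmlib-2.0.4-1.el6.x86_64.rpm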

Install kmod-oracleasm-2.0.8-4.el6_6.x86_64.rpm

Partition the disks

Reboot the nodes

Configure oracleasm
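
On both nodes, something like:

    /usr/sbin/oracleasm configure -i    # owner oracle, group dba, start on boot: yes
    /usr/sbin/oracleasm init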

Create disks (on one server)
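
Disk labels and partitions below are examples; map them to the partitions created earlier:

    /usr/sbin/oracleasm createdisk CRS1  /dev/sdb1
    /usr/sbin/oracleasm createdisk DATA1 /dev/sdc1
    /usr/sbin/oracleasm createdisk FRA1  /dev/sdd1
    /usr/sbin/oracleasm createdisk REDO1 /dev/sde1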

On the rest of the servers run
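
Which is simply:

    /usr/sbin/oracleasm scandisks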

Make sure all servers see the disks
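
Every node should return the same list:

    /usr/sbin/oracleasm listdisks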

Install these for X

Make sure X is working using xclock

Run GI installation

Choose advanced installation and follow the steps

Run Oracle Database installation and follow the steps

Clear the data

Stop the current instances running in NY and clear the data

On instance #2

On instance #1

As the oracle user, run

Make sure all data files and control files were removed using
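
A query along these lines does the job:

    SELECT name, total_mb, free_mb FROM v$asm_diskgroup;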

The expected output is that all disk groups (DATA, FRA, REDO, except CRS) are almost completely empty and free. Check the Total_MB column against Free_MB.

Set DBID

Use the DBID from the current production database and set this DBID in RMAN on the new hosts (the DBID must be identical for the restore):

On instance #1
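
Conceptually (the DBID below is a placeholder; take the real one from SELECT dbid FROM v$database; on production):

    rman target /
    RMAN> SET DBID 1234567890;
    RMAN> STARTUP NOMOUNT;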

Restore control file

Get the name of the latest control file backup. Log in to the backup server and get the most recent file (we are using NetBackup, but this doesn't matter as long as you can fetch the most recent control file backup).

Restore the control file
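
Since the backups go through NetBackup, the restore runs over an sbt channel; roughly as follows (the backup piece name is a placeholder, use the one found on the backup server, and add NetBackup PARMS to the channel if your configuration needs them):

    run {
      allocate channel t1 device type sbt;
      restore controlfile from 'c-1234567890-20160101-01';
      release channel t1;
    }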

In ASM, get the name of the restored control file

Set the restored control file in the CONTROL_FILES parameter and disable the CLUSTER_DATABASE parameter
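
Using the file name asmcmd returned, something like:

    ALTER SYSTEM SET control_files='+DATA/proddb/controlfile/current.256.901234567' SCOPE=SPFILE;
    ALTER SYSTEM SET cluster_database=FALSE SCOPE=SPFILE;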

Restore DB

Shut down the DB and start it in mount

Create file restore.rman on /tmp and write this block there
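
A minimal version of such a script (channel count and NetBackup parameters depend on your setup):

    run {
      allocate channel t1 device type sbt;
      allocate channel t2 device type sbt;
      restore database;
    }

It can then be run with something like rman target / cmdfile=/tmp/restore.rman log=/tmp/restore.log.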

Restore DB

Update new DB with archives

Now we have a copy of our production DB, and we want to keep it updated with the latest changes until the day of the cut-over.

We can't open the database yet: the new DB is 11.2.0.4 and the current production is 11.2.0.2, so once we open the DB we will no longer be able to apply archive logs from 11.2.0.2 on 11.2.0.4.
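
Instead, the restored copy stays mounted and we periodically catalog and apply the production archive logs, which is presumably what scripts like catalog_archive.sh and db_recover.sh (run for the last time on cut-over day) wrap. A conceptual sketch of one cycle, with a placeholder archive location; the recover step is expected to stop with an error once all available archive logs have been applied:

    rman target / <<EOF
    catalog start with '+FRA/PRODDB/ARCHIVELOG' noprompt;
    recover database;
    EOF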

Enable supplemental log

On the production DB in the current datacenter, enable supplemental logging:
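
That is:

    ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;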

Make sure it’s turned on

After it is enabled, perform a log switch on all the nodes

Cut-Over Day

On the production DB at the old datacenter, perform a log switch on all the nodes

Back up the archive logs and run catalog_archive.sh and db_recover.sh for the last time

Save latest applied SCN

On the new hosts, get the SCN of the last archive log applied and keep it

At the current datacenter (before the migration), convert the SCN to time and keep it too
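
On the production database, something like this (the SCN is a placeholder for the one saved in the previous step):

    SELECT SCN_TO_TIMESTAMP(1234567890123) FROM dual;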

Open the new database

On the new database, disable block change tracking

Open the new DB
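
Because the recovery was incomplete and the datafiles are still 11.2.0.2 under 11.2.0.4 binaries, the open has to be a resetlogs open in upgrade mode, something like:

    ALTER DATABASE OPEN RESETLOGS UPGRADE;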

Recreate temporary tablespace

Recreate the temp table space on the new hosts
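
Names and sizes below are placeholders:

    CREATE TEMPORARY TABLESPACE temp2 TEMPFILE '+DATA' SIZE 10G AUTOEXTEND ON;
    ALTER DATABASE DEFAULT TEMPORARY TABLESPACE temp2;
    DROP TABLESPACE temp INCLUDING CONTENTS AND DATAFILES;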

Upgrade the new DB

Upgrade the new DB to 11.2.0.4
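
This is the standard catalog upgrade, run from the new 11.2.0.4 home while the database is open in upgrade mode; roughly:

    cd $ORACLE_HOME/rdbms/admin
    sqlplus / as sysdba @catupgrd.sql
    # catupgrd.sql shuts the database down when it completes;
    # after restarting, utlrp.sql can be run to recompile invalid objects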

Watch the alert log to make sure the upgrade is OK

Start the DB

Set cluster configuration

Set back to cluster configuration, shut down the DB, and start both instances:
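
The parameter change and shutdown, roughly:

    ALTER SYSTEM SET cluster_database=TRUE SCOPE=SPFILE SID='*';
    SHUTDOWN IMMEDIATE;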

Startup one node only

On the second node:

Fill in missing data – using logminer

Since the moment we applied the last archive log and opened the database, a lot has happened on the old database, and now it's time to roll those changes onto the new database. Reminder: since our source DB was 11.2.0.2 and the new one is 11.2.0.4, once we opened the DB we could no longer apply the archive logs, so we will use LogMiner. In our case, the upgrade phase of the database took ~30 minutes.

On the current DB, perform a log switch on both nodes

Prepare LOG MINER dictionary file
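
The dictionary is built on the old (source) database into a flat file; the target directory has to be listed in utl_file_dir. Something like:

    EXECUTE DBMS_LOGMNR_D.BUILD('dictionary.ora', '/tmp', DBMS_LOGMNR_D.STORE_IN_FLAT_FILE);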

Get the relevant archive logs for the mining. The time frame is from the traffic redirection until now.
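
A query along these lines on the old DB lists them (the date is a placeholder; in RAC remember to include both threads):

    SELECT thread#, sequence#, name
    FROM   v$archived_log
    WHERE  first_time >= TO_DATE('2016-01-01 10:00:00', 'YYYY-MM-DD HH24:MI:SS')
    ORDER  BY first_time;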

Copy the output to a text file

Edit the output in the text file so that the first archive log is added with the DBMS_LOGMNR.NEW flag and the rest with DBMS_LOGMNR.ADDFILE (example below)
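
The archive log names below are placeholders, but the shape of the file is:

    EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '+FRA/PRODDB/ARCHIVELOG/thread_1_seq_1001.arc', OPTIONS => DBMS_LOGMNR.NEW);
    EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '+FRA/PRODDB/ARCHIVELOG/thread_2_seq_845.arc', OPTIONS => DBMS_LOGMNR.ADDFILE);
    EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '+FRA/PRODDB/ARCHIVELOG/thread_1_seq_1002.arc', OPTIONS => DBMS_LOGMNR.ADDFILE);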

Run the script from the previous step on the old DB in sqlplus

Start LOG MINER:
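
Pointing it at the dictionary file that was built earlier:

    EXECUTE DBMS_LOGMNR.START_LOGMNR(DICTFILENAME => '/tmp/dictionary.ora');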

Make sure the process is valid

Create the SQL file with relevant statements
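
The statements come from the SQL_REDO column of V$LOGMNR_CONTENTS (queried in the same session that started LogMiner); roughly, with placeholder schema and SCN filters:

    SET heading OFF feedback OFF pagesize 0 linesize 4000 long 2000000
    SPOOL /tmp/production_gap.sql
    SELECT sql_redo
    FROM   v$logmnr_contents
    WHERE  seg_owner IN ('APPUSER')
    AND    operation IN ('INSERT', 'UPDATE', 'DELETE')
    AND    scn > 1234567890123;
    SPOOL OFF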

Enable block change tracking in the new hosts

Enable the tracking log on the new hosts
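
That is (add USING FILE '+DATA' if db_create_file_dest is not set):

    ALTER DATABASE ENABLE BLOCK CHANGE TRACKING;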

Make sure it's enabled

Add to the beginning of /tmp/production_gap.sql

Add a commit command at the end of /tmp/production_gap.sql

Copy the /tmp/production_gap.sql to the new DB and run it

And that's it. The new cluster is now up to date, and we just need to shut down LogMiner on the old DB.

In old DB, end the LOG MINER session
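
That is:

    EXECUTE DBMS_LOGMNR.END_LOGMNR;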
