RAC to RAC switchover for Standby v6

Problem Details

RAC to RAC switchover for Standby v6

Originally, Standby v6 was not designed to fully support a RAC to RAC configuration after switchover. This guide shows you a few simple steps to overcome this.

Steps Performed

1. Install the latest Dbvisit v6 and set up the new standby database first in RAC => Single mode

Example environment:

1st RAC primary node: primarydb1 instance DBV1 of database DBV
2nd RAC primary node: primarydb2 instance DBV2 of database DBV
1st RAC standby node: standbydb1 <no instance running>
2nd RAC standby node: standbydb2 <no instance running>
Do the usual setup for the DDC files and also create the standby database. No special steps are needed. Create the standby database as a single instance with no special parameters.

The only important note is that the variable ARCHDEST must be set to the same value in all DDC files and that this location exists on each server. For general instructions, you can refer to:

https://dbvisit.atlassian.net/wiki/display/dbdc/Dbvisit+Standby+with+RAC
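
As a minimal sketch (the archive location below is only an assumed example path, not a required one): set the same value in every DDC file, for example ARCHDEST = /u01/app/oracle/dbvisit_arch/DBV, and create that directory on every primary and standby node:

mkdir -p /u01/app/oracle/dbvisit_arch/DBV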

Remember: on the standby site, there can be ONLY ONE instance started in recovery mode; all other instances have to be dormant.
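
A quick way to confirm this on each standby node is to check for a running SMON background process (a simple sketch using the instance names from the example environment):

ps -ef | grep ora_smon_DBV | grep -v grep

Only standbydb1 should show a process (for instance DBV1); standbydb2 should show nothing.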

At the end of this step, we should end up with the following:
1st RAC primary node: primarydb1 primary instance DBV1 of database DBV, DDC file DBV1
2nd RAC primary node: primarydb2 primary instance DBV2 of database DBV, DDC file DBV2
1st RAC standby node: standbydb1 standby instance DBV1 of database DBV, DDC file DBV1 (but this is just a copy of the DDC file from the 1st primary node)
2nd RAC standby node: standbydb2 <no instance running>

Make sure that sending and applying archivelogs works without any problems - the standby instance DBV1 should apply archivelogs from both threads.
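
As a sketch using the example DDC names (the command form dbvisit <DDC> is the one used in step 6 below), one send/apply cycle looks like this:

On primarydb1: dbvisit DBV1    (sends thread 1 archivelogs)
On primarydb2: dbvisit DBV2    (sends thread 2 archivelogs)
On standbydb1: dbvisit DBV1    (applies archivelogs from both threads)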

2. Prepare the standby environment on the 2nd standby RAC node to act as RAC

- Create the directory for audit_file_dest

- Create an init file in $ORACLE_HOME/dbs pointing to the RAC spfile
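
A minimal sketch of these two actions on standbydb2 (the audit destination and spfile paths are assumptions - substitute the values from your own environment):

mkdir -p /u01/app/oracle/admin/DBV/adump
echo "SPFILE='+DATA/DBV/spfileDBV.ora'" > $ORACLE_HOME/dbs/initDBV2.ora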

3. Set RAC parameters on 1st standby RAC instance

alter system set undo_tablespace='<TBS1>' sid='DBV1' scope=spfile;
alter system set undo_tablespace='<TBS2>' sid='DBV2' scope=spfile;
alter system set instance_number=1 sid='DBV1' scope=spfile;
alter system set instance_number=2 sid='DBV2' scope=spfile;
alter system set thread=1 sid='DBV1' scope=spfile;
alter system set thread=2 sid='DBV2' scope=spfile;

This is a one-time action only. You can consider setting these parameters during the creation of your standby database.

4. Switchover

Now we are ready for the switchover. Initiate the procedure. After the switchover is done, you should have the following situation:

1st RAC standby node: primarydb1 standby instance DBV1 of database DBV, DDC file DBV1 (with reversed values SOURCE, DESTINATION)
2nd RAC standby node: primarydb2 <no instance running>, DDC file DBV2 (in original state, untouched by the switchover procedure)
1st RAC primary node: standbydb1 primary instance DBV1 of database DBV, DDC file DBV1 (with reversed values SOURCE, DESTINATION)
2nd RAC primary node: standbydb2 <no instance running>

Please note that after the switchover, the DBV1 instance on standbydb1 is started with the parameter cluster_database=false.
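
You can verify this from SQL*Plus on standbydb1 (a simple check, not part of the original procedure):

show parameter cluster_database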

5. Configure new primary database 

- You need to enable cluster database on the new primary instance DBV1 on standbydb1:

alter system set cluster_database=true scope=spfile;

- Add new primary database to oracle grid on standbydb1:

srvctl add database -d DBV -o $ORACLE_HOME 
srvctl add instance -d DBV -i DBV1 -n standbydb1
srvctl add instance -d DBV -i DBV2 -n standbydb2

- Copy DDC file DBV2 from primarydb2 to standbydb2 and edit it manually - you need to swap the values of the SOURCE and DESTINATION parameters (see the sketch after the note below)

NOTE: If you always keep the same standby site configuration for active instances (the instance on the 1st standby node is always the active one, while the instance on the 2nd standby node remains shut down), this is a one-time action only and you do not need to repeat it after the next switchover.
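
A minimal sketch of the copy and edit (the DDC file name and location are assumptions - adjust them to your installation). On standbydb2:

scp oracle@primarydb2:/usr/local/dbvisit/standby/dbv_DBV2.env /usr/local/dbvisit/standby/

Then edit the copied file so that the SOURCE and DESTINATION values are swapped, e.g. SOURCE = standbydb2 and DESTINATION = primarydb2.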

6. Final steps

- Restart instance DBV1 and start instance DBV2 on the new primary site

Important: srvctl status database should give you the proper output after the restart, and both new primary instances should start up with no problems.
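
For example, after shutting down the manually started DBV1 instance, a sketch using the database name from the example environment:

srvctl start database -d DBV
srvctl status database -d DBV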

- Now it is time to send the archivelogs to your standby site for the first time. Run the command dbvisit <DDC> first on the 2nd primary node, then on the 1st.
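
With the example names used in this guide, that would be:

On standbydb2 (2nd new primary node): dbvisit DBV2
On standbydb1 (1st new primary node): dbvisit DBV1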

- Apply the logs on the standby server - everything should be working. After the logs are applied you are ready for the next switchover (switchback). There is no special action needed after the switchback - you only need to set cluster_database=true again and restart your new primary.

NOTE: You should always ensure that srvctl does not automatically start the standby instance, for example after a server reboot.
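
One way to do this (a sketch using standard srvctl options; adjust to your environment) is to set the management policy of the standby-side database resource to MANUAL, or to disable it entirely:

srvctl modify database -d DBV -y MANUAL
srvctl disable database -d DBV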
