RAC enabled Standby


Read our docs: RAC Enabled Standby Database


Problem Details

Problem Statement

Enable RAC in Standby database

Applicable Error Code

N/A

Affected Versions

Standby 6 (Oracle >8i), Standby 7 (Oracle 9i, 10g, 11g & 12c (non-multitenant only))

Affected Platforms

Unix Platforms

 

 


Description

Regarding the use of RAC on both primary and standby: Dbvisit Standby versions 6 and 7 only support a single-instance standby database (but it can easily be converted to RAC enabled).  However, with some scripting and a few manual steps you can still run your standby database in a RAC-enabled/capable database environment.

As mentioned, there is a blog post that shows the steps to convert the standby database (which already has 2 threads, as the primary is RAC enabled) to a fully RAC-enabled environment.

The important step is to make sure that the ARCHDEST location on both the primary and the standby RAC cluster is on shared storage (ideally ACFS) shared between the nodes of that specific cluster.

Important: we do not mean shared storage between the primary and standby clusters; we mean shared storage between the RAC nodes of each cluster.


Example: on the primary RAC cluster with primary nodes 1 and 2, you should have a shared ARCHDEST folder that is available to both primary RAC nodes.  The standby RAC cluster with standby nodes 1 and 2 will then have its own shared folder where the ARCHDEST is located. In version 6 and version 7 the ARCHDEST location (path) must still match between primary and standby (the folder path) - for example, if the ARCHDEST on the primary is /dbvisit_archdest, the location on the standby should ideally be /dbvisit_archdest as well - but please note these locations are local to that cluster.
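A quick way to confirm the ARCHDEST really is shared within a cluster is to check the same mount point from every node of that cluster. The commands below are only a sketch, using the example hostnames and path from the test system further below; adjust them to your environment:

# Run from standby node 1 (kiwi21); the same filesystem should be visible on both standby nodes
df -h /dbvisit_archdest
ssh oracle@kiwi22 "df -h /dbvisit_archdest"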

 

Now it is possible to convert the standby database (we only create a single-instance standby database, but it does contain 2 threads) to a RAC-enabled database, but it is important that only one instance runs at a time.
You would also need two sets of DDC files: one configured to use StandbyNode1 as the standby, and one configured to use StandbyNode2 as the standby. You will have to switch to the second set if your standby (only one instance) is started on the second node.
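As an illustration only (the DDC names below are hypothetical, and assume the usual dbv_<DDC>.env naming under the Dbvisit Standby conf directory, using the PROD example described further below), the two sets could look like this, and you run Dbvisit with whichever DDC matches the node the standby instance is currently running on:

dbv_PROD.env - DDC configured with kiwi21 (standby node 1) as the standby server
dbv_PRODNODE2.env - DDC configured with kiwi22 (standby node 2) as the standby server

./dbvisit PROD (while the standby instance is running on kiwi21)
./dbvisit PRODNODE2 (after the standby instance has been started on kiwi22)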

In summary, you can convert the standby to a RAC standby with the following steps (similar to what was described in the blog post, but take note that the database is a standby and is not open or activated).

The example below is from one of my test systems, where the RAC database is called PROD with instances PROD1 and PROD2. The standby cluster is running on servers kiwi21 (standby node 1) and kiwi22 (standby node 2).
Also note the setting of the listener parameters (local_listener; remote_listener is optional, but recommended); you will need to adjust these to your environment.


Solution

Use the following SQL*Plus commands on the standby database to update the server parameter file:

alter system set cluster_database=true scope=spfile sid='*';

alter system set instance_number=1 scope=spfile sid='PROD1';

alter system set instance_number=2 scope=spfile sid='PROD2';

alter system set thread=1 scope=spfile sid='PROD1';

alter system set thread=2 scope=spfile sid='PROD2';

alter system set undo_tablespace='UNDOTBS1' scope=spfile sid='PROD1';

alter system set undo_tablespace='UNDOTBS2' scope=spfile sid='PROD2';

alter system set local_listener='(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.40.73)(PORT=1521))' scope=both sid='PROD1';

alter system set local_listener='(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.40.74)(PORT=1521))' scope=both sid='PROD2';

alter system set remote_listener='kiwi212-scan:1521' scope=both sid='*';
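The cluster_database, instance_number, thread and undo_tablespace changes above use scope=spfile, so they only take effect once the standby instance has been restarted. As a quick sanity check after the restart you can, for example, run the following in SQL*Plus on the standby:

show parameter cluster_database
show parameter instance_number
show parameter thread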

 

Make sure the spfile/pfile is accessible from both standby nodes and that the password file is present on standby node 1 and node 2 (also make sure $ORACLE_BASE/admin/DB_NAME/adump exists on the second node).
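A minimal sketch of preparing standby node 2, assuming ORACLE_BASE is /u01/app/oracle (in line with the ORACLE_HOME used below) and a local password file under $ORACLE_HOME/dbs (on 12c the password file may be kept in ASM instead); adjust the paths to your environment:

# copy the password file for the second instance and create its audit dump directory
scp $ORACLE_HOME/dbs/orapwPROD1 oracle@kiwi22:$ORACLE_HOME/dbs/orapwPROD2
ssh oracle@kiwi22 "mkdir -p /u01/app/oracle/admin/PROD/adump"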

To add the database to the clusterware:

srvctl add database -d PROD -o /u01/app/oracle/product/11.2.0/db_1 -c RAC -a "DATA,FRA" -p +DATA/PROD/spfilePROD.ora

srvctl add instance -d PROD -i PROD1 -n kiwi21

srvctl add instance -d PROD -i PROD2 -n kiwi22

srvctl config database -d PROD

The standby database can now, for example, be started (mounted) on the cluster with:

srvctl start database -d PROD -o mount
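You can then confirm that the database is registered and that only one instance is running (remember, only one instance should run at a time) with:

srvctl status database -d PROD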

*** Important: from a Dbvisit Standby point of view, when you want to perform a Graceful Switchover, the instance on the 2nd node (the one you are not switching over to) must be shut down, otherwise the Graceful Switchover (GS) process will fail.
If you are planning on switching over to the 2nd node, make sure you also use/test/run Dbvisit with the 2nd set of DDC files.
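For example, if you plan to switch over while the standby is running on standby node 1 (kiwi21), you can make sure the instance on standby node 2 is down, and check the status, with:

srvctl stop instance -d PROD -i PROD2 -o immediate
srvctl status database -d PROD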

I would suggest you test this type of setup, with different scenarios, in a test environment before you implement it in a production environment.

Dbvisit Standby version 8 supports both RAC primary and RAC standby, for Oracle 11g & 12c only.

 

 
