dbvisit standby installation and configuration best practices

Dear Sirs,

This week we did our first Dbvisit Standby installation.

Our aim was to install Dbvisit Standby on a Red Hat Cluster as the source, with a single server as the target.


Now we need to know:
- Is installing Dbvisit Standby on a Red Hat Cluster as the source supported and certified?
- Is there a technote with steps to follow for installation on a Red Hat Cluster?
- Are there any best practices for our scenario?

We look forward to your reply.
Thank you.

Best regards,
Francesco Vulcano

Nicoletta Fornasari Answered

Official comment

A couple of important notes:

  1. Please make sure that whatever cluster software or configuration you are using is a certified combination for the Oracle Database software and version. This includes the type of storage and filesystem used for the database. We cannot specify which filesystem type or option you should use for the database, but it should match what Oracle certifies as a valid filesystem for the database version you are running. This is something to keep in mind when running cluster configurations.

  2. Make sure the primary and standby database software editions and exact versions (patch levels, etc.) match.

  3. Please use the latest Dbvisit Standby version, 7.0.60 - http://www.dbvisit.com/products/standby_latest_changes/

  4. From our point of view, we do support configurations where the database runs in a cluster - for example, something like Oracle Fail Safe, where an Active-Passive cluster is configured. We do not certify every type of cluster configuration individually, although we do specifically support and certify Oracle RAC configurations. We will support you from our product point of view, but it is important that the configuration is a valid, Oracle-supported configuration - meaning the database version and edition are supported for the cluster configuration you are looking at, including the filesystem that is used.

There are a few things to keep in mind when setting up this type of configuration.

When using a Unix-based cluster configuration, you have two options:

Option 1: Installing Dbvisit Standby on each node in the cluster:

  • Install Dbvisit Standby as normal on each node in a DBVISIT BASE location such as /usr/dbvisit
  • You can use Dbvnet or SSH (if you have SSH user equivalence configured)
  • Configure a DDC from the active node. When done, edit the DDC to make use of the HOSTNAME_CMD option; see https://dbvisit.atlassian.net/wiki/display/UGDS7/Dbvisit+Standby+and+using+Non-RAC+Clusters. With this option you add an entry to the DDC file that points to a shell script which echoes the "cluster_name" or "cluster_alias" to be used (this alias or hostname always points to the active cluster node). The name echoed by the shell script is the same value you then use for the SOURCE parameter in the DDC. When Dbvisit Standby runs, it executes the script specified by HOSTNAME_CMD and takes its output as the hostname to use. Create the script on both the primary and standby nodes: on the primary nodes it must print the cluster name or alias you want to use, and on the standby it must print that server's hostname (or, if the standby is also a cluster, the standby cluster name). That way, whether the script is executed on the primary or the standby, it outputs the hostname that Dbvisit Standby should use.
  • Once the DDC is created, make sure you copy it to both the nodes in the cluster as you are not using shared storage.
  • Ideally, the ARCHDEST location should be on shared storage and accessible to whichever node is active at the time - this is especially important on the standby server (if it is a cluster).
  • You should then be able to run Dbvisit as normal from the active node in the cluster.
  • Using the command line instead of the GUI (browser) is recommended in this case.
  • Using the cron scheduler, rather than the GUI, is recommended for scheduling.
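The HOSTNAME_CMD script described above can be very short. Here is a minimal sketch; the node names (prodnode1, prodnode2), the cluster alias (prodcluster) and the standby hostname (standbysrv) are hypothetical examples, not values from this thread:

```shell
#!/bin/sh
# Hypothetical HOSTNAME_CMD script - all names below are examples only.
# On a primary cluster node it echoes the cluster alias (which always
# resolves to the active node); anywhere else it echoes the standby hostname.
case "$(hostname -s)" in
  prodnode1|prodnode2) echo "prodcluster" ;;   # primary Red Hat Cluster nodes
  *)                   echo "standbysrv"  ;;   # single-server standby
esac
```

The DDC would then point HOSTNAME_CMD at this script and use the same alias for SOURCE (exact parameter syntax is in the linked Dbvisit documentation). Place an identical copy of the script on every primary and standby node.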

Option 2: Installing Dbvisit Standby only on shared storage for the cluster.

  • This is a good, recommended approach: on the primary cluster, install Dbvisit Standby once in a shared location.
  • In this configuration, using SSH as the communication method (SSH user equivalence between the primary and standby nodes is required) might be better suited - unless you can add Dbvnet as a cluster resource so that it always runs on the active node.
  • As with Option 1, you must use the HOSTNAME_CMD option: configure a DDC from the active node, then edit it so that HOSTNAME_CMD points to the shell script described under Option 1, and use the name that script echoes as the SOURCE parameter (see https://dbvisit.atlassian.net/wiki/display/UGDS7/Dbvisit+Standby+and+using+Non-RAC+Clusters).
  • Ideally, the ARCHDEST location should be on shared storage and accessible to whichever node is active at the time - this is especially important on the standby server (if it is a cluster).
  • You should then be able to run Dbvisit as normal from the active node in the cluster.
  • Using the command line instead of the GUI (browser) is recommended in this case.
  • Using the cron scheduler, rather than the GUI, is recommended for scheduling.
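As an illustration of the cron approach, a sketch of the crontab for the Dbvisit OS user on each node is shown below. The install path, the DDC name PRODDB, the 10-minute interval and the log path are assumptions for illustration, not values from this thread; check the Dbvisit documentation for the exact command-line invocation for your version:

```shell
# Hypothetical crontab entry - paths and DDC name are examples only.
# Run the Dbvisit Standby log-shipping/apply process every 10 minutes.
# Via HOSTNAME_CMD, Dbvisit determines the correct hostname at runtime,
# so the same entry can be installed on every node.
*/10 * * * * /usr/dbvisit/standby/dbvisit PRODDB >> /usr/dbvisit/standby/log/dbvisit_PRODDB.log 2>&1
```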

We recommend that you test this configuration on a test system before implementing it in production, so that you can see the steps involved and become familiar with the configuration. In summary, these types of configurations are similar to a normal single-instance to single-instance configuration; the key is to make sure you use the HOSTNAME_CMD option.

Charmaigne Bezuidenhout