12c RAC to RAC Dataguard configuration !!!


In this article, we will see the steps to configure 12c RAC to RAC Dataguard. Both the Primary and the Standby databases will be RAC databases.

Here, the Primary database data files are stored on an ACFS file system, whereas the Standby database will use ASM storage. We will use the RMAN Active Database Duplication feature to create the Standby database.

Below are Server and Database details:

We will perform the below steps:
– Perform prerequisites for the Dataguard configuration
– Restore the Primary RAC database as a single-instance database on the Standby server using RMAN Active Database Duplication
– Convert the standby database to a RAC database



• Prerequisites for DataGuard configuration

Perform below steps on Primary Database

– Enable archivelog mode

If archivelog mode is not already enabled, you can enable it as shown below. First, stop the RAC database.

Now start any one of the instances of the RAC database and change the mode to archivelog.

Then start the database again.
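The sequence might look like this (the database name orcldb matches the pfile edits later in this article; your name may differ):

```sql
-- Stop the full RAC database first (as oracle, from the OS):
--   $ srvctl stop database -d orcldb

-- From any one node, start a single instance and enable archivelog mode:
SQL> STARTUP MOUNT;
SQL> ALTER DATABASE ARCHIVELOG;
SQL> SHUTDOWN IMMEDIATE;

-- Start the full RAC database again:
--   $ srvctl start database -d orcldb
```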


– Enable force logging mode

FORCE LOGGING forces redo to be generated even when NOLOGGING operations are executed; only temporary tablespaces and temporary segments remain unlogged. Log in to any instance and execute the below command to enable force logging.
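For example:

```sql
SQL> ALTER DATABASE FORCE LOGGING;

-- Verify that it is enabled:
SQL> SELECT force_logging FROM v$database;
```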


– Configure Standby Redo Log files

Standby redo log files are used by a standby database to store the redo it receives from the primary database. Create standby redo logs for both threads (one thread per instance), sized the same as the online redo logs, with one more group per thread than the online redo logs have.
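A sketch of the commands — group numbers and the 200M size are illustrative and must be adapted to your online redo log configuration:

```sql
-- Thread 1 (instance 1):
SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 GROUP 5 SIZE 200M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 GROUP 6 SIZE 200M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 GROUP 7 SIZE 200M;

-- Thread 2 (instance 2):
SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD 2 GROUP 8 SIZE 200M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD 2 GROUP 9 SIZE 200M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD 2 GROUP 10 SIZE 200M;
```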


– Configure “standby_file_management” parameter

We have to make sure that when we add or drop data files on the primary database, those files are automatically added or dropped on the standby database as well. This is done by setting the below parameter.
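The command is:

```sql
SQL> ALTER SYSTEM SET standby_file_management=AUTO SCOPE=BOTH SID='*';
```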


– Configure “remote_login_passwordfile” parameter

Make sure that remote_login_passwordfile is set to ‘EXCLUSIVE’. If it’s not, execute the below query,

[bounce the database for changes to take effect]
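For example:

```sql
SQL> SHOW PARAMETER remote_login_passwordfile

-- If the value is not EXCLUSIVE (this is a static parameter,
-- so a restart is required for it to take effect):
SQL> ALTER SYSTEM SET remote_login_passwordfile=EXCLUSIVE SCOPE=SPFILE SID='*';
```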


– Configuring parameters required for Redo Transport

Our next step is to ensure that Redo is archived and transported appropriately. For this we will need to configure below parameters.

1) To enable archiving of redo on our primary we use the following command.

2) Next we configure the transport of the redo to the Standby database.

3) The next parameter is FAL_SERVER. This specifies where the database should look for missing archive logs if there is a gap in the logs. It is used when the database is in the standby role and has a gap in the redo it has received.

4) We then need to let the database know which databases are in the Data Guard configuration.
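The four parameters above might be set as follows. The destination attributes are illustrative (for example, LOCATION=USE_DB_RECOVERY_FILE_DEST assumes a fast recovery area is configured); the db_unique_name values orcldb/orcldbs match the names used later in this article:

```sql
-- 1) Local archiving on the primary:
SQL> ALTER SYSTEM SET log_archive_dest_1=
  'LOCATION=USE_DB_RECOVERY_FILE_DEST VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=orcldb'
  SCOPE=BOTH SID='*';

-- 2) Redo transport to the standby (TNS alias orcldbs):
SQL> ALTER SYSTEM SET log_archive_dest_2=
  'SERVICE=orcldbs ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=orcldbs'
  SCOPE=BOTH SID='*';

-- 3) Gap resolution source when this database runs in the standby role:
SQL> ALTER SYSTEM SET fal_server=orcldbs SCOPE=BOTH SID='*';

-- 4) All databases in the Data Guard configuration:
SQL> ALTER SYSTEM SET log_archive_config='DG_CONFIG=(orcldb,orcldbs)' SCOPE=BOTH SID='*';
```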


– Create initialization parameter file for Standby database

Log in to the primary database and create a pfile, which we will use to start the Standby database.
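For example (the /tmp path and file name are illustrative):

```sql
SQL> CREATE PFILE='/tmp/initorcldbs.ora' FROM SPFILE;
```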

You will now need to make the following changes to the new pfile we created
a) Change the ‘audit_file_dest’ and ‘dispatchers’ parameters – change the value from ‘orcldb’ to ‘orcldbs’.
b) Change the db_unique_name to ‘orcldbs’ in ‘log_archive_dest_1’ parameter.
c) Change the service and db_unique_name to ‘orcldb’ in ‘log_archive_dest_2’ parameter.
d) Change ‘fal_server’ to the primary SID.
e) Add the following parameter – db_unique_name=’orcldbs’

We have also changed below parameters since the source and target location are different.
a) ‘control_files’ – changed path of control files to point to ASM disk groups
b) ‘db_create_file_dest’ – changed value to ASM disk group
c) ‘db_create_online_log_dest_1’ and ‘db_create_online_log_dest_2’ – Changed value to ASM disk groups

Add below parameters if your source and target locations are different.
a) db_file_name_convert=(‘/u01/app/acfs/data’,’+DATA’)
b) log_file_name_convert=(‘/u01/app/acfs/ctrl_log1′,’+CTRL_LOG1′,’/u01/app/acfs/ctrl_log2′,’+CTRL_LOG2’)

Once the pfile for Standby database is ready, copy it over to the standby server and change the name to Standby SID.
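Something like the below, assuming the standby host is standby1 and the standby instance SID is orcldbs1 (both illustrative):

```shell
$ scp /tmp/initorcldbs.ora oracle@standby1:$ORACLE_HOME/dbs/initorcldbs1.ora
```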


– Copy the password file from Primary to Standby

You can check the password file location from 12c onwards using the “srvctl config” command.

From the Primary server, copy the password file to the standby server and place it in “$ORACLE_HOME/dbs” directory and change the name to Standby SID.
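A sketch of both steps — host name and file paths are illustrative; your password file may live in ASM or ACFS rather than $ORACLE_HOME/dbs:

```shell
# Check the password file location (12c onwards):
$ srvctl config database -d orcldb | grep -i "password file"

# Copy it to the standby server, renamed to the standby SID:
$ scp $ORACLE_HOME/dbs/orapworcldb1 oracle@standby1:$ORACLE_HOME/dbs/orapworcldbs1
```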


– SQL*Net Configuration

Add TNS entries of Standby database in tnsnames.ora of all the nodes of Primary RAC server.
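An entry similar to the below (host name and port are illustrative):

```text
ORCLDBS =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = standby1)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = orcldbs)
    )
  )
```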


Perform below steps on Standby Server

Since we will be restoring the Primary database as a single instance on the Standby server, the below changes are done only on the 1st node of the Standby RAC cluster.

– Create adump directory

On your standby server create the adump directory. Reference can be taken from the pfile created for Standby database.
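For example (the path is illustrative — use the audit_file_dest value from the standby pfile):

```shell
$ mkdir -p /u01/app/oracle/admin/orcldbs/adump
```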


– SQL*Net Configuration

Before we create the standby database, we need to ensure the databases in our configuration will be able to communicate with each other. This is required before creating the standby database if we want to use the RMAN “duplicate from active database” feature. For this we will need to configure both the listener.ora and tnsnames.ora files on Standby site.

Configure TNSNAMES.ora file:
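On the standby site, add entries for both databases so RMAN can reach each of them. Host names are illustrative:

```text
ORCLDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = primary1)(PORT = 1521))
    (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = orcldb))
  )

ORCLDBS =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = standby1)(PORT = 1521))
    (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = orcldbs))
  )
```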


Configure LISTENER.ora: Since the listener.ora is configured in GRID_HOME, we have to login as grid user and configure static listener configuration in the listener.ora file for standby database. Add entry similar to below in listener.ora file. The reason for this is that our standby database will be in nomount stage. In NOMOUNT stage, the database instance will not self-register with the listener, so you must tell the listener it is there.
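A static registration entry similar to the below (the ORACLE_HOME path and instance SID orcldbs1 are illustrative):

```text
SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = orcldbs)
      (ORACLE_HOME = /u01/app/oracle/product/12.1.0/dbhome_1)
      (SID_NAME = orcldbs1)
    )
  )
```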

Now stop and start the LISTENER using srvctl command
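For example:

```shell
$ srvctl stop listener
$ srvctl start listener
$ lsnrctl status
```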

Check the status of LISTENER using “lsnrctl status” command. The output should show the new standby database name.


– Add your standby database entry in “oratab” file

Add the new Standby database entry in “/etc/oratab” file as shown below.
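An entry similar to the below (SID and ORACLE_HOME path are illustrative):

```text
orcldbs1:/u01/app/oracle/product/12.1.0/dbhome_1:N
```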



• Start the Standby database and run RMAN Active Database Duplication

– Start Standby database in nomount mode

Now that the prerequisites are completed, next step is to start the Standby database in nomount mode using the pfile.
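For example (SID and pfile path are illustrative):

```sql
-- $ export ORACLE_SID=orcldbs1
-- $ sqlplus / as sysdba
SQL> STARTUP NOMOUNT PFILE='/u01/app/oracle/product/12.1.0/dbhome_1/dbs/initorcldbs1.ora';
```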

Now that the Standby instance is in nomount stage, try to connect to the database as SYS user from both Primary and Standby server.
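Something like the below, run from both servers (replace the password placeholder):

```shell
$ sqlplus sys/<password>@orcldb as sysdba
$ sqlplus sys/<password>@orcldbs as sysdba
```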


– Create spfile for Standby database

Now that the databases are able to communicate with each other, we will create spfile for the Standby database using the pfile.
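For example (the pfile path is illustrative):

```sql
SQL> CREATE SPFILE FROM PFILE='/u01/app/oracle/product/12.1.0/dbhome_1/dbs/initorcldbs1.ora';
```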


– Create directories on Standby server

Login as “grid” user and create the below directories in ASM diskgroup.
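Using the disk groups referenced by the standby pfile (+DATA, +CTRL_LOG1, +CTRL_LOG2):

```shell
$ asmcmd
ASMCMD> mkdir +DATA/orcldbs
ASMCMD> mkdir +CTRL_LOG1/orcldbs
ASMCMD> mkdir +CTRL_LOG2/orcldbs
```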



– Run RMAN active database from Primary server

Login to Primary server as oracle user, connect to both Primary and Standby databases as shown below and run the RMAN active database duplication command.
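A minimal sketch of the duplication, relying on the *_name_convert parameters already set in the standby pfile (replace the password placeholder):

```shell
$ rman target sys/<password>@orcldb auxiliary sys/<password>@orcldbs

RMAN> DUPLICATE TARGET DATABASE
  FOR STANDBY
  FROM ACTIVE DATABASE
  NOFILENAMECHECK;
```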


– Start and Verify Redo Apply on Standby database

Start the Redo Apply

Now that the Standby database is created, first we need to check whether the Redo Apply is working or not before proceeding with the next steps. Using the below command start the Redo Apply.
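The command is:

```sql
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE
     USING CURRENT LOGFILE DISCONNECT FROM SESSION;
```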

The above command starts the recovery process using the standby logfiles that the primary is writing the redo to. It also tells the standby to return to the SQL command line once the command is complete.


Verifying that Redo Apply is working

You can run the below query to check the status of different processes.
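For example, on the standby:

```sql
SQL> SELECT process, status, thread#, sequence#, block#
     FROM v$managed_standby;
```

The MRP0 process should show APPLYING_LOG, and the RFS processes should be receiving redo.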

To check whether the Primary and Standby databases are in sync or not, execute below query.

On Primary:

On Standby:
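One common way to compare the two sides:

```sql
-- On the primary: highest archived sequence per thread
SQL> SELECT thread#, MAX(sequence#)
     FROM v$archived_log
     GROUP BY thread#;

-- On the standby: highest applied sequence per thread
SQL> SELECT thread#, MAX(sequence#)
     FROM v$archived_log
     WHERE applied = 'YES'
     GROUP BY thread#;
```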



• Convert the standby database to RAC database

The next step is to convert the single-instance Standby database to a RAC database. Below are the steps we need to follow.

– Remove the static entry from listener.ora

Login as grid user and remove the static listener entry that we had added in listener.ora file earlier. Save the changes and restart the local listener.


– Create new pfile

Create a new temporary pfile as shown below and shutdown the database.
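For example (the /tmp path is illustrative):

```sql
SQL> CREATE PFILE='/tmp/initorcldbs_rac.ora' FROM SPFILE;
SQL> SHUTDOWN IMMEDIATE;
```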


– Edit the newly created pfile

We now need to enable all the cluster related parameters in the newly created pfile as shown below.
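The cluster-related entries in the pfile would look similar to the below; instance names and undo tablespace names are illustrative:

```text
*.cluster_database=TRUE
orcldbs1.instance_number=1
orcldbs2.instance_number=2
orcldbs1.thread=1
orcldbs2.thread=2
orcldbs1.undo_tablespace='UNDOTBS1'
orcldbs2.undo_tablespace='UNDOTBS2'
```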


– Start Standby database and create SPFILE

Now start the standby database using the new pfile. If everything is configured properly, the instance should start.

Now that the Standby database has been started with the cluster parameters enabled, we will create Spfile in the central location on ASM diskgroup.
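The two steps above might look like this (spfile name and path within the +DATA disk group are illustrative):

```sql
SQL> STARTUP MOUNT PFILE='/tmp/initorcldbs_rac.ora';

SQL> CREATE SPFILE='+DATA/orcldbs/spfileorcldbs.ora'
     FROM PFILE='/tmp/initorcldbs_rac.ora';
```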


– Create adump directory

Create the “adump” directory on Node 2 which we had created on Node 1 earlier.


– Start the DB using spfile

Now we need to check whether the database gets started using our new spfile which we created on ASM diskgroup. Before proceeding with below steps, first shutdown the database.

Rename the old pfile and spfile in $ORACLE_HOME/dbs directory.
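For example (file names are illustrative):

```shell
$ cd $ORACLE_HOME/dbs
$ mv initorcldbs1.ora   initorcldbs1.ora.bkp
$ mv spfileorcldbs1.ora spfileorcldbs1.ora.bkp
```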


Create new pfile pointing to ASM spfile

SCP the pfile to Node 2


Set the ORACLE_SID and start the DB instance on Node 1
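The three steps above can be sketched as follows (host name standby2 and the ASM spfile path are illustrative):

```shell
# Create a one-line pfile that points to the spfile on ASM:
$ echo "SPFILE='+DATA/orcldbs/spfileorcldbs.ora'" > $ORACLE_HOME/dbs/initorcldbs1.ora

# Copy it to Node 2, renamed for that instance:
$ scp $ORACLE_HOME/dbs/initorcldbs1.ora oracle@standby2:$ORACLE_HOME/dbs/initorcldbs2.ora

# Start the instance on Node 1:
$ export ORACLE_SID=orcldbs1
$ sqlplus / as sysdba
SQL> STARTUP MOUNT;
```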


– Add Standby database and instances to Cluster

Now that the database has been started using the spfile on a shared location, we will add the database to the cluster. Execute the below commands to add the database and its instances to the cluster configuration.
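Something similar to the below — ORACLE_HOME path, node names, and instance names are illustrative:

```shell
$ srvctl add database -d orcldbs -o /u01/app/oracle/product/12.1.0/dbhome_1 \
    -p '+DATA/orcldbs/spfileorcldbs.ora' -r PHYSICAL_STANDBY -s MOUNT
$ srvctl add instance -d orcldbs -i orcldbs1 -n standby1
$ srvctl add instance -d orcldbs -i orcldbs2 -n standby2
```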


– Copy password file from Primary to Standby

From Primary server copy the password file again to the Standby server.

Login as grid user on Standby server and copy the password file to ASM diskgroup as shown below.

Login as oracle user and add the password file location in database configuration using srvctl command.
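A sketch of the two steps — source path and ASM destination are illustrative:

```shell
# As grid, copy the password file into the ASM disk group:
$ asmcmd pwcopy /u01/app/oracle/product/12.1.0/dbhome_1/dbs/orapworcldbs1 \
    +DATA/orcldbs/orapworcldbs

# As oracle, register the new location in the cluster configuration:
$ srvctl modify database -d orcldbs -pwfile +DATA/orcldbs/orapworcldbs
```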


– Start the Standby RAC database

Before starting the standby RAC database, shutdown the already running instance.

Start the database using srvctl command.
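For example:

```shell
# Shut down the manually started instance first:
$ sqlplus / as sysdba
SQL> SHUTDOWN IMMEDIATE;

# Then start the standby as a RAC database via the cluster:
$ srvctl start database -d orcldbs
$ srvctl status database -d orcldbs
```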


– Start the recovery on Standby database

Now that the standby single instance is converted to standby RAC database, the final step is to start the recovery (MRP) process. Using the below command start the recovery on Standby.
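The command is:

```sql
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE
     USING CURRENT LOGFILE DISCONNECT FROM SESSION;
```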


That’s it, we have finished the RAC to RAC Dataguard configuration in 12c.


