
This step-by-step guide takes you through installing and configuring Oracle Grid Infrastructure 12c and Oracle Database 12c, including a RAC-to-RAC Data Guard and Data Guard Broker configuration, in a primary and physical standby environment for high availability.
Prerequisites
You need to download the following software if you do not already have it:
1. Oracle Enterprise Linux 6 (64-bit) or Red Hat Enterprise Linux 6 (64-bit)
2. Oracle Grid Infrastructure 12c (64-bit)
3. Oracle Database 12c (64-bit)
Environment
You need four machines (physical or virtual), each with two network adapters and at least 2GB of memory.
Installing Oracle Enterprise Linux 6
To begin the installation, power on your first machine, boot from the Oracle Linux media, and install it as a basic server. More specifically, it should be a server installation with a minimum of 4GB swap, a separate partition for /u01 with at least 20GB of space, the firewall disabled, SELinux set to permissive, and the following package groups installed.

Base System > Base
Base System > Compatibility libraries
Base System > Hardware monitoring utilities
Base System > Large Systems Performance
Base System > Network file system client
Base System > Performance Tools
Base System > Perl Support
Servers > Server Platform
Servers > System administration tools
Desktops > Desktop
Desktops > Desktop Platform
Desktops > Fonts
Desktops > General Purpose Desktop
Desktops > Graphical Administration Tools
Desktops > Input Methods
Desktops > X Window System
Applications > Internet Browser
Development > Additional Development
Development > Development Tools
If you are using physical machines, you have to install all four machines one by one; on a virtual platform, you can clone your first machine and then change only the IP addresses and hostnames of the clones.
Click Reboot to finish the installation.

Preparing Oracle Enterprise Linux 6
Now that the Oracle Linux installation is complete, we need to prepare our Linux machines for the Grid Infrastructure and Database installation. Make sure an internet connection is available to perform the following tasks.
Configuring Network
In this section, we will set up networking for our database servers on both the primary and standby sites. Make sure you replace the IP address, gateway, netmask, hostname and domain to reflect yours.
Configuring the public network interface on primary node pdbsrv1:
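The interface configuration listing was lost from this page; a minimal sketch of what the file might contain is below. The node IP address is an assumption inferred from the 192.168.10.x SCAN range used later in this guide; substitute your own addressing.

```shell
# /etc/sysconfig/network-scripts/ifcfg-eth0 -- public interface on pdbsrv1
# IPADDR is a hypothetical example; use your own public address.
DEVICE=eth0
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none
IPADDR=192.168.10.101
NETMASK=255.255.255.0
GATEWAY=192.168.10.1
```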
Save and close file when you are finished.
Next, configuring private network interface on primary node pdbsrv1:
Save and close file when you are finished.
Next, add the following entries in /etc/hosts file on primary node pdbsrv1:
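The hosts-file listing itself was lost; a sketch of typical RAC entries follows. Only the SCAN addresses are given in the original, so every other IP below is an assumption; the SCAN names themselves are resolved by DNS, not by this file.

```shell
# /etc/hosts -- public, private (interconnect) and virtual addresses
# for the primary nodes (IPs other than the SCAN range are hypothetical).
# Public
192.168.10.101  pdbsrv1.tspk.com   pdbsrv1
192.168.10.102  pdbsrv2.tspk.com   pdbsrv2
# Private
10.10.10.101    pdbsrv1-priv.tspk.com  pdbsrv1-priv
10.10.10.102    pdbsrv2-priv.tspk.com  pdbsrv2-priv
# Virtual
192.168.10.103  pdbsrv1-vip.tspk.com   pdbsrv1-vip
192.168.10.104  pdbsrv2-vip.tspk.com   pdbsrv2-vip
```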
Save and close file when you are finished.
Configuring public network interface on primary node pdbsrv2:
Save and close file when you are finished.
Next, configuring private network interface on primary node pdbsrv2:
Save and close file when you are finished.
Next, add the following entries in /etc/hosts file on primary node pdbsrv2:
Save and close file when you are finished.
Now set the hostname with below command on primary node pdbsrv1:
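The command was stripped from this page; on EL6 it is presumably something like the following, run as root:

```shell
# Apply the hostname immediately, then persist it across reboots (EL6 style).
hostname pdbsrv1.tspk.com
sed -i 's/^HOSTNAME=.*/HOSTNAME=pdbsrv1.tspk.com/' /etc/sysconfig/network
```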
and on pdbsrv2:
Configuring public network interface on stand-by node sdbsrv1:
Save and close file when you are finished.
Configuring private network interface on stand-by node sdbsrv1:
Save and close file when you are finished
Next, add the following entries in /etc/hosts file on stand-by node sdbsrv1
Save and close file when you are finished.
Configuring public network interface on stand-by node sdbsrv2:
Save and close file when you are finished.
Configuring private network interface on standby node sdbsrv2:
Save and close file when you are finished.
Next, add the following entries in /etc/hosts file on stand-by node sdbsrv2:
Save and close file when you are finished.
Now set hostname with below command on stand-by node sdbsrv1
and on stand-by node sdbsrv2:
Creating HOST-A Record in DNS
At this stage, you need to create host (A) records in your DNS server to resolve the SCAN names to the IP addresses you set for both the primary and standby nodes.
PRIMARY SCAN
192.168.10.105 pdbsrv-scan.tspk.com
192.168.10.106 pdbsrv-scan.tspk.com
192.168.10.107 pdbsrv-scan.tspk.com
STANDBY SCAN
192.168.10.115 sdbsrv-scan.tspk.com
192.168.10.116 sdbsrv-scan.tspk.com
192.168.10.117 sdbsrv-scan.tspk.com
When you are done with all of the above steps, proceed with the below.
Configuring SELinux, IPTABLES, NTP
You must disable SELinux on all four nodes. Open /etc/selinux/config and change the SELINUX=enforcing parameter to SELINUX=disabled:
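As root, the edit can be scripted rather than done by hand, for example:

```shell
# Persist the change in /etc/selinux/config (effective after reboot)...
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
# ...and drop the running system to permissive so no reboot is needed now.
setenforce 0
```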
Save and close file when you are finished
You should stop firewall/iptables on all four nodes like below:
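On EL6 that means stopping the service and disabling it at boot, as root:

```shell
service iptables stop
chkconfig iptables off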
Stop NTP service on all four nodes like below:
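The NTP step presumably looks like the following (renaming ntp.conf is the usual extra step so that Oracle Clusterware's CTSS takes over cluster time synchronization; it is an assumption here, not stated in the original):

```shell
# Stop NTP now and keep it off after reboot (run as root on all four nodes).
service ntpd stop
chkconfig ntpd off
mv /etc/ntp.conf /etc/ntp.conf.orig
```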
Creating Grid and Database Home
You need to create the below directory structure on all four nodes.
Then set the same password for the oracle user on all four nodes by typing the below command:
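The directory commands were lost from this page; a sketch is below. The database home matches the path entered later in the installer screens, while the grid home path and the oracle user's dba group ownership are assumptions.

```shell
# Run as root on all four nodes.
mkdir -p /u01/app/12.1.0/grid
mkdir -p /u01/app/oracle/product/12.1.0/db_1
chown -R oracle:dba /u01
chmod -R 775 /u01
# Set the same oracle password everywhere:
passwd oracle
```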
Profile Environment
You need to add these environment variables to the oracle user's .bash_profile on all four nodes. Make sure you replace the highlighted text on each node with yours. When you are logged in as oracle, edit .bash_profile and add the following entries at the end of the file:
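The variable listing was stripped; a plausible sketch for pdbsrv1 follows. The exact variable set is an assumption based on the grid/database homes used elsewhere in this guide; adjust ORACLE_HOSTNAME and the SID suffix per node (e.g. PDBRAC2 on pdbsrv2, SDBRAC1 on sdbsrv1).

```shell
# Appended to ~oracle/.bash_profile on pdbsrv1
export ORACLE_HOSTNAME=pdbsrv1.tspk.com
export ORACLE_BASE=/u01/app/oracle
export GRID_HOME=/u01/app/12.1.0/grid
export DB_HOME=$ORACLE_BASE/product/12.1.0/db_1
export ORACLE_HOME=$DB_HOME
export ORACLE_SID=PDBRAC1
export BASE_PATH=/usr/sbin:$PATH
export PATH=$ORACLE_HOME/bin:$BASE_PATH
```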
Save and close file when you are finished.
Now create grid_env file with below parameters:
Add below parameters in it:
Save and close file when you are finished.
Next, create db_env file with below parameters:
Add below parameters in it:
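The contents of both helper files were lost; the usual pattern is that grid_env and db_env simply repoint ORACLE_SID and ORACLE_HOME, building on the .bash_profile variables above. A sketch (the ASM SID is an assumption):

```shell
# ~/grid_env -- point the shell at the grid home and local ASM instance
export ORACLE_SID=+ASM1
export ORACLE_HOME=$GRID_HOME
export PATH=$ORACLE_HOME/bin:$BASE_PATH

# ~/db_env -- point the shell at the database home and local instance
export ORACLE_SID=PDBRAC1
export ORACLE_HOME=$DB_HOME
export PATH=$ORACLE_HOME/bin:$BASE_PATH
```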
Save and close file when you are finished.
The environment variables from .bash_profile, grid_env and db_env on all four nodes will look similar to what is shown in the image below.

If /dev/shm is smaller than 4GB, increase it and remount it using the below command.
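The remount is an in-place tmpfs resize, run as root:

```shell
mount -o remount,size=4G /dev/shm
df -h /dev/shm   # confirm the new size
```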
To make the change persistent across reboots, you need to modify /etc/fstab accordingly.
Save and close file when you are finished.
If you do not increase it and keep it below 4GB, it will cause an error during the prerequisites check of the Grid installation.
Creating Diskgroup
We have already set up Openfiler as iSCSI shared storage for this lab. Now we need to create the diskgroup on that shared storage on primary node PDBSRV1, and later we will initialize and scan the same diskgroup on PDBSRV2.
You must be the root user to perform the step below:
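The ASMLib commands were lost from this page; a sketch follows. The disk label and the device name of the iSCSI partition are assumptions; use the device your Openfiler target presents.

```shell
# Run as root on PDBSRV1.
/usr/sbin/oracleasm configure -i   # answer: oracle / dba / y / y
/usr/sbin/oracleasm init
/usr/sbin/oracleasm createdisk DISK1 /dev/sdb1
/usr/sbin/oracleasm listdisks
```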
Now initialize and scan the same diskgroup on primary node PDBSRV2 using the below command.
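Presumably the second node only needs to rescan, as root:

```shell
/usr/sbin/oracleasm scandisks
/usr/sbin/oracleasm listdisks   # the disk stamped on PDBSRV1 should appear
```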
Now we will create the diskgroup on our standby node SDBSRV1, and later we will initialize and scan the same diskgroup on SDBSRV2.
Now initialize and scan the same diskgroup on SDBSRV2 using the below command.
We are done with the prerequisites on all four nodes; next we move on to the grid installation.
Installing Grid Infrastructure 12c - Primary Site
We have completed the preparation of all four machines and are ready to start the Oracle Grid Infrastructure 12c installation. You should have either VNC or Xmanager installed on your client machine for the graphical installation of grid/database. In our case, we have a Windows 7 client machine and we are using Xmanager.
Now, copy the grid infrastructure and database software to your primary node PDBSRV1 and extract them under /opt or any other directory of your choice. In our case, we have CD-ROM media and we will extract it under /opt.
Log in with root user on your primary node PDBSRV1 and perform the following steps.
Copy cvuqdisk-1.0.9-1.rpm to the other three nodes under /opt and install it on each node one by one.
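The install command itself was stripped; it is presumably the standard rpm invocation, where CVUQDISK_GRP names the group that owns the shared disks (dba here, matching the OSASM group chosen later):

```shell
# Run as root on each node.
export CVUQDISK_GRP=dba
rpm -Uvh /opt/cvuqdisk-1.0.9-1.rpm
```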
Now, switch to oracle user and perform grid installation on your primary node PDBSRV1
Run grid_env to set environment variable for grid infrastructure installation.
Now, execute the following command from the directory you have extracted grid in to begin the installation.
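Assuming the media was extracted to /opt/grid as described above, the launch looks like:

```shell
# As oracle on PDBSRV1, from an X-capable session.
. ~/grid_env
/opt/grid/runInstaller
```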
Follow the screenshots to set up grid infrastructure according to your environment.
Select "Skip Software Update" and click Next.
Select "Install and Configure Oracle Grid Infrastructure for a Cluster" and click Next.
Select "Configure a Standard Cluster" and click Next.
Choose "Typical Installation" and click Next.
Change the "SCAN Name", add the second host to the cluster, enter the oracle user password, then click Next.
Verify the destination path, enter the password and choose "dba" as the OSASM group. Click Next.
Select "External" for redundancy, select at least one disk, and click Next.
Keep the default and click Next.
Keep the default and click Next.
It is safe to ignore this warning, since we cannot add more than 4GB of memory. Click Next.
Verify the summary and, if you are happy with it, click Install.
When the installer asks you to execute the root scripts, go back to the command-line terminals on PDBSRV1 and PDBSRV2 and execute the following scripts.
You must be root and execute both scripts on PDBSRV1 first:
When you are finished on PDBSRV1, execute both scripts on PDBSRV2:
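The scripts themselves were lost from this page; they are the two standard post-install scripts. The inventory and grid home paths below are assumptions matching the layout used in this guide.

```shell
# Run as root, on PDBSRV1 first and then on PDBSRV2.
/u01/app/oraInventory/orainstRoot.sh
/u01/app/12.1.0/grid/root.sh
```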
When the script execution is complete, click OK.

Click close.

At this stage, the Grid Infrastructure 12c installation is complete on the primary nodes. You can verify the status of the installation using the following commands.
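The verification commands were stripped; the usual checks are:

```shell
# As oracle, with the grid environment loaded.
crsctl check cluster -all   # CRS/CSS/EVM health on every node
crsctl stat res -t          # status of all cluster resources
srvctl status asm           # ASM instances
```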
Note: If you find ora.oc4j offline, you can enable and start it manually by executing the following command.
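The command was lost here; presumably:

```shell
srvctl enable oc4j
srvctl start oc4j
srvctl status oc4j
```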
Installing Oracle Database 12c - Primary Site
Since we have completed the grid installation, we now need to install Oracle Database 12c by executing the runInstaller command from the directory you extracted the database into.
Uncheck the security updates checkbox and click the "Next" button and "Yes" on the subsequent warning dialog.

Select the "Install database software only" option, then click the "Next" button.

Accept the "Oracle Real Application Clusters database installation" option by clicking the "Next" button.

Make sure both nodes are selected, then click the "Next" button.

Select the required languages, then click the "Next" button.

Select the "Enterprise Edition" option, then click the "Next" button.

Enter "/u01/app/oracle" as the Oracle base and "/u01/app/oracle/product/12.1.0/db_1" as the software location, then click the "Next" button.

Select the desired operating system groups, then click the "Next" button.

Wait for the prerequisite check to complete. If there are any problems either click the "Fix & Check Again" button, or check the "Ignore All" checkbox and click the "Next" button.

If you are happy with the summary information, click the "Install" button.

Wait while the installation takes place.

When prompted, execute the configuration script on each node as the root user, on pdbsrv1 first and then on pdbsrv2.
When the scripts have been run on both nodes, click the "OK" button.

Click the "Close" button to exit the installer.

At this stage, the database installation is complete on the primary nodes.
Creating a Database - Primary Site
Since we have completed the database installation on our primary nodes, it is time to create a database by executing the following command.
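The command was lost from this page; given the screens that follow, it is presumably the Database Configuration Assistant:

```shell
# As oracle on PDBSRV1, from an X-capable session.
. ~/db_env
dbca
```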
Select the "Create Database" option and click Next.
Select the "Advanced Mode" option and click Next.
Select exactly what is shown in the image and click Next.
Enter "PDBRAC" as the database name, keep the SID as is, and click Next.
Make sure both nodes are selected and click Next.
Keep the default and click Next.
Select "Use the Same Administrative Password for All Accounts", enter the password and click Next.
Keep the default and click Next.
Select "Sample Schemas" (we need them for testing later) and click Next.
Increase "Memory Size" and navigate to the "Sizing" tab.
Increase "Processes" and navigate to the "Character Sets" tab.
Select the following options and click "All Initialization Parameters".
Enter "PDBRAC" for db_unique_name and click Close. Click Next.
Select the below options and click Next.
If you are happy with the Summary report, click Finish.
The database creation process starts; it will take some time to complete.
Click Exit, then click Close.
We have successfully created a database on Primary nodes (pdbsrv1, pdbsrv2). We can check database status by executing the following command.
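The status command was stripped; the usual cluster-wide check is:

```shell
srvctl status database -d PDBRAC   # instance status on both nodes
srvctl config database -d PDBRAC   # registered configuration
```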
Installing Grid Infrastructure 12c - Standby Site
Since we have already installed all prerequisites on our standby site nodes (sdbsrv1, sdbsrv2) for the grid/database installation, we can start the grid installation straightaway.
Log in to sdbsrv1 as the oracle user and execute the runInstaller command to begin the grid installation.
Follow the same steps you performed during the installation on the primary nodes, with minor changes as shown in the image below.
Enter the "SCAN Name", add the second node "sdbsrv2", enter the oracle user password in the "OS Password" box and click Next.

Once the grid installation is complete, we can check the status of the installation using the following commands.
Note: If you find ora.oc4j offline, you can enable and start it manually by executing the following command.
Installing Database 12c - Standby Site
You can begin the Database 12c installation by following the same steps you performed during installation on the primary nodes, with minor changes as shown in the images below.
You do not need to run "dbca" to create a database on the standby nodes. Once the database installation is complete, we can start configuring Data Guard, beginning with the primary site.
Data Guard Configuration - Primary Site
Log in to PDBSRV1 as the oracle user and perform the following tasks to prepare the Data Guard configuration.
Now back up the password file from the primary database using the following commands. It will be required later for the standby database configuration.
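The backup commands themselves were lost; a hedged sketch follows. The file names and the ASM path are assumptions based on the orapwsdbrac file referenced later in this guide, and where the password file lives depends on how dbca created it.

```shell
# As oracle on pdbsrv1.
mkdir -p /u01/app/oracle/backup

# If the password file is a plain file under the database home:
cp $ORACLE_HOME/dbs/orapwPDBRAC1 /u01/app/oracle/backup/orapwsdbrac

# If it is stored inside ASM (possible in 12c), copy it out with asmcmd
# from the grid environment instead:
# asmcmd pwcopy +DATA/PDBRAC/orapwpdbrac /u01/app/oracle/backup/orapwsdbrac
```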
Now take the primary database backup using the following commands.
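The RMAN commands were stripped; a sketch, writing into the backup directory that is later copied to the standby:

```shell
# As oracle on pdbsrv1.
rman target / <<'EOF'
backup database format '/u01/app/oracle/backup/%U' plus archivelog;
backup current controlfile for standby format '/u01/app/oracle/backup/stby_%U.ctl';
EOF
```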
Next, modify $ORACLE_HOME/network/admin/tnsnames.ora on primary node PDBSRV1 to add entries for both the primary (PDBRAC) and standby (SDBRAC) databases. Save and close the file when you are finished.
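The tnsnames.ora entries kept identical on all four nodes might look like the following (service names and SCAN hostnames follow the naming used in this guide):

```shell
# $ORACLE_HOME/network/admin/tnsnames.ora
PDBRAC =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = pdbsrv-scan.tspk.com)(PORT = 1521))
    (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = PDBRAC))
  )

SDBRAC =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = sdbsrv-scan.tspk.com)(PORT = 1521))
    (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = SDBRAC))
  )
```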
Next, copy tnsnames.ora from PDBSRV1 to the other three nodes under $ORACLE_HOME/network/admin so that all nodes have the same tnsnames.ora.
Next, copy initSDBRAC.ora and orapwsdbrac from primary node PDBSRV1 to standby node SDBSRV1.
Copy /u01/app/oracle/backup from primary node pdbsrv1 to standby node sdbsrv1 under the same location as on the primary.
Data Guard Configuration - Standby Site
Log in to SDBSRV1 and SDBSRV2 as the oracle user and perform the following tasks to prepare the standby site for the Data Guard configuration.
You need to adjust a few parameters in the initSDBRAC.ora file for standby database creation in a Data Guard environment.
Save and close file when you are finished
Now we need to create the ASM directories on standby node SDBSRV1 using the following commands.
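The asmcmd commands were lost; a sketch, assuming the DATA diskgroup and the SDBRAC db_unique_name used elsewhere in this guide:

```shell
# As oracle on SDBSRV1, with the grid environment loaded.
asmcmd mkdir +DATA/SDBRAC
asmcmd mkdir +DATA/SDBRAC/DATAFILE +DATA/SDBRAC/TEMPFILE \
             +DATA/SDBRAC/ONLINELOG +DATA/SDBRAC/CONTROLFILE \
             +DATA/SDBRAC/PARAMETERFILE
```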
Creating physical standby database
Log in to standby server sdbsrv1 as the oracle user and run the RMAN active database duplication command as below.
Once the duplication process is complete, you need to check whether Redo Apply is working before proceeding to the next steps.
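The two commands referenced here were lost in extraction; a sketch follows ("password" is a placeholder for your SYS password, and the connect identifiers are the tnsnames.ora aliases created earlier):

```shell
# As oracle on sdbsrv1: duplicate from the active primary...
rman target sys/password@PDBRAC auxiliary sys/password@SDBRAC <<'EOF'
duplicate target database for standby from active database nofilenamecheck;
EOF

# ...then start Redo Apply on the new standby.
sqlplus / as sysdba <<'EOF'
alter database recover managed standby database using current logfile disconnect;
EOF
```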
The above command starts the recovery process using the standby logfiles that the primary is writing redo to. It also tells the standby to return to the SQL command line once the command completes. To verify that Redo Apply is working, you can run the query below to check the status of the various processes.
To check whether the Primary and Standby databases are in sync, execute the query below.
On Primary Database:
On Standby Database:
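The queries were stripped; the usual comparison is the latest archived sequence per thread (this is RAC, so check each thread):

```shell
# On the primary:
sqlplus -s / as sysdba <<'EOF'
select thread#, max(sequence#) from v$archived_log group by thread#;
EOF

# On the standby (applied sequences should trail the primary only slightly):
sqlplus -s / as sysdba <<'EOF'
select thread#, max(sequence#) from v$archived_log where applied='YES' group by thread#;
EOF
```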
Create new spfile from pfile:
Now start the standby database using the newly created pfile as shown below:
Now that the Standby database has been started with the cluster parameters enabled, we need to create spfile in the central location on ASM diskgroup.
Now we need to check whether the standby database starts using the new spfile we created on the ASM diskgroup.
Rename the old pfile and spfile in $ORACLE_HOME/dbs directory as shown below
Now create the below initSDBRAC1.ora file on sdbsrv1 and initSDBRAC2.ora file on sdbsrv2 under $ORACLE_HOME/dbs with the spfile entry so that the instance can start with the newly created spfile.
Save and close file.
Copy initSDBRAC1.ora to sdbsrv2 as $ORACLE_HOME/dbs/initSDBRAC2.ora
Now start the database on standby node sdbsrv1 as shown in the example below.
Now that the database has been started using the spfile in the shared location, we will add the database to the cluster. Execute the below command to add the database and its instances to the cluster configuration.
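The srvctl commands were lost from this page; a sketch using 12c long options (the home and spfile paths follow the layout used in this guide):

```shell
# Register the standby database and its instances with Oracle Clusterware.
srvctl add database -db SDBRAC -oraclehome /u01/app/oracle/product/12.1.0/db_1 \
  -spfile +DATA/SDBRAC/spfilesdbrac.ora -role PHYSICAL_STANDBY -startoption MOUNT
srvctl add instance -db SDBRAC -instance SDBRAC1 -node sdbsrv1
srvctl add instance -db SDBRAC -instance SDBRAC2 -node sdbsrv2
```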
From primary node pdbsrv1, copy the password file again to standby node sdbsrv1.
Log in on standby node sdbsrv1 and copy the password file to the ASM diskgroup as shown below.
Now we need to tell the database where to look for the password file using the srvctl command, as shown in the example below.
At this point we can start the standby RAC database, but first shut down the already running single instance as shown in the example below.
Now we can start the database using the following command.
Now that the standby single instance has been converted to a standby RAC database, the final step is to start the recovery (MRP) process using the following command on the standby node.
At this stage, we have completed the RAC-to-RAC Data Guard configuration, but a few more steps are still needed.
DG Broker Configuration 12c
Since our primary and standby databases are RAC, we will change the default location of the DG Broker files to a centralized location, as shown in the example below.
Log in as the oracle user on primary node pdbsrv1 and execute the below commands.
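The commands were stripped; a sketch (the ASM file names are assumptions; on the standby use the corresponding SDBRAC paths):

```shell
# As oracle on pdbsrv1.
sqlplus / as sysdba <<'EOF'
alter system set dg_broker_config_file1='+DATA/PDBRAC/dr1pdbrac.dat' scope=both sid='*';
alter system set dg_broker_config_file2='+DATA/PDBRAC/dr2pdbrac.dat' scope=both sid='*';
alter system set dg_broker_start=true scope=both sid='*';
EOF
```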
Similarly, change the settings on Standby database server.
Register the primary and standby databases in the broker configuration as shown in the example below.
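The DGMGRL session was lost; a sketch (the configuration name "dgconfig" and the SYS password are placeholders; the connect identifiers are the tnsnames.ora aliases):

```shell
dgmgrl sys/password@PDBRAC <<'EOF'
create configuration 'dgconfig' as primary database is 'PDBRAC' connect identifier is PDBRAC;
add database 'SDBRAC' as connect identifier is SDBRAC maintained as physical;
show configuration;
EOF
```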
Now we need to enable the broker configuration and check whether it was enabled successfully.
Note: If you encounter an error "ORA-16629: database reports a different protection level from the protection mode" then perform the following steps.
Once the broker configuration is enabled, the MRP process should start on the Standby database server.
The output of the above command shows that the MRP process has started on instance 1. You can log in to standby node sdbsrv1 and check whether MRP is running, as shown below.
Now that the MRP process is running, log in to both the primary and standby databases and check whether the logs are in sync.
Below are some extra DGMGRL commands you can use to check the status of the databases.
Perform switchover activity from primary database (PDBRAC) to physical standby database (SDBRAC) using DGMGRL prompt.
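The switchover session was lost from this page; with the broker in place it is presumably a single DGMGRL command (password placeholder again):

```shell
dgmgrl sys/password@PDBRAC <<'EOF'
switchover to 'SDBRAC';
show configuration;
EOF
```

After the switchover, SDBRAC becomes the primary and PDBRAC the physical standby; a second `switchover to 'PDBRAC';` reverses the roles again.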
Conclusion
We have completed the Oracle 12c RAC-to-RAC database installation and configuration, including the Data Guard configuration for high availability in a primary and physical standby environment.