Oracle Real Application Clusters (RAC) is a well-known product among
Oracle's solutions for maintaining high availability of your business data.
Oracle RAC allows the workload to be shared among all the cluster
nodes, tolerating up to N-1 node failures, where N is the total number
of nodes. Oracle RAC improves with every version, and this release is
no different: version 12.1.0.1 introduces two features, "Flex ASM" and
"Flex Cluster", that address the demands of cloud-computing-oriented
environments.
Oracle RAC 12c introduces two new concepts:
Hub Nodes: These nodes are connected to each other via the private network and have direct access to the shared storage, just like in previous versions. They are the nodes that access the Oracle Cluster Registry (OCR) and Voting Disk (VD) directly.
Leaf Nodes: These nodes are lighter: they are not connected to each other and do not access the shared storage the way Hub Nodes do. Each Leaf Node communicates with, and is connected to the cluster through, the Hub Node to which it is attached.
This topology allows loosely coupled application servers to form a cluster with tightly coupled database servers. Tightly coupled servers are Hub Servers that share storage for database, OCR and Voting devices as well as peer-to-peer communication with other Hub Servers in the cluster. A loosely coupled server is a Leaf Server that has a loose communication association with a single Hub Server in the cluster and does not require shared storage nor peer-to-peer communication with other Hub or Leaf Servers in the cluster, except to communicate with the Hub to which it is associated. In 12.1.0.1, Leaf Servers are designed for greater application high availability and multi-tier resource management.
Prior to Oracle 12c, for a database instance to use ASM, the ASM instance had to be up and running on every node before the database instance was brought up. If the ASM instance failed to come up, a database instance using ASM for storage could not be brought up either. In practice this meant the database instance was inaccessible regardless of the technologies in use, i.e. RAC, ASM and shared storage.
With the launch of Oracle 12c, this constraint has been addressed by Oracle Flex ASM, whose key capability is failing over to another node in the cluster. In this essentially Hub-and-Leaf architecture, Oracle Clusterware seamlessly transfers the connections of a failed node to a replacement ASM instance on another participating node. The number of ASM instances running in a given cluster is called the ASM cardinality, with a default value of 3; the cardinality can be changed with a Clusterware command.
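As a sketch of what that Clusterware command looks like: the commands below assume a running 12c Grid Infrastructure, and the target cardinality of 4 is an illustrative value, not one from this article's test cluster.

```shell
# Confirm that Flex ASM is enabled (run from any Hub node).
asmcmd showclustermode

# Show where ASM instances are currently running.
srvctl status asm -detail

# Raise the ASM cardinality from the default of 3 to 4
# (use "-count ALL" to run an ASM instance on every Hub node).
srvctl modify asm -count 4
```

These commands only work against a live cluster, so they are shown here as a transcript rather than a runnable script.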
Oracle Flex Cluster
Architecturally, an Oracle Flex Cluster comprises a Hub-and-Leaf topology in which only the Hub nodes have direct access to the Oracle Cluster Registry (OCR) and Voting Disk (VD). Applications can nevertheless access the database via Leaf nodes even though no ASM instance runs on them: the connection to the database goes through a Hub node, making this transparent to the application. Figure 1 depicts a typical Oracle Flex Cluster with four Leaf nodes and two Hub nodes. In a nutshell, Oracle Flex Cluster requires Oracle Flex ASM.
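A node's Hub or Leaf role can be inspected and changed with crsctl. This is a sketch assuming a running 12c Flex Cluster; it requires a live Clusterware stack, so it is shown as a transcript.

```shell
# Show the configured role (hub or leaf) of the local node.
crsctl get node role config

# Show the role the node currently has at runtime.
crsctl get node role status

# Reconfigure the local node as a Leaf node; this takes effect
# only after the Clusterware stack on the node is restarted.
crsctl set node role leaf
```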
Oracle RAC 12c with Oracle Flex ASM
Standard Oracle Flex ASM configuration:

ASM Instance Failure on Oracle Flex ASM configuration:
1. Log into RAC Database instance1 (rac1)
[oracle@oel6-112-rac1 Desktop]$ hostname
oel6-112-rac1.localdomain
2. Check the status of ASM & RAC Database instances
[oracle@oel6-112-rac1 Desktop]$ ps -ef | grep pmon
oracle 3325 1 0 17:39 ? 00:00:00 asm_pmon_+ASM1
oracle 3813 1 0 17:40 ? 00:00:00 mdb_pmon_-MGMTDB
oracle 5806 1 0 17:42 ? 00:00:00 ora_pmon_orcl1
oracle 6193 1 0 17:42 ? 00:00:00 apx_pmon_+APX1
3. Check the status of the ASM instance in RAC Database instances from instance1 (rac1)
[oracle@oel6-112-rac1 Desktop]$ srvctl status asm
ASM is running on oel6-112-rac2,oel6-112-rac1
4. Check the status of Cluster in instance1 (rac1)
[oracle@oel6-112-rac1 Desktop]$ crsctl check cluster
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
5. Command to check whether Oracle Flex ASM is enabled or not (rac1)
[oracle@oel6-112-rac1 Desktop]$ asmcmd
ASMCMD> showclustermode
ASM cluster : Flex mode enabled
ASMCMD> showclusterstate
Normal
6. Check the cardinality of the ASM (rac1)
[oracle@oel6-112-rac1 Desktop]$ srvctl status asm -detail
ASM is running on oel6-112-rac2,oel6-112-rac1
ASM is enabled.
[oracle@oel6-112-rac1 Desktop]$ srvctl config asm -detail
ASM home: /u01/app/12.1.0/grid
Password file: +DATA/orapwASM
ASM listener: LISTENER
ASM is enabled.
ASM instance count: 3
Cluster ASM listener: ASMNET1LSNR_ASM
7. Command to check whether Oracle Flex ASM is enabled or not (rac2)
[oracle@oel6-112-rac2 Desktop]$ asmcmd
ASMCMD> showclustermode
ASM cluster : Flex mode enabled
ASMCMD> showclusterstate
Normal
ASMCMD> exit
8. Check the cardinality of the ASM (rac2)
[oracle@oel6-112-rac2 Desktop]$ srvctl config asm -detail
ASM home: /u01/app/12.1.0/grid
Password file: +DATA/orapwASM
ASM listener: LISTENER
ASM is enabled.
ASM instance count: 3
Cluster ASM listener: ASMNET1LSNR_ASM
9. Bringing Down the ASM instance in RAC Database instance1 (rac1)
[oracle@oel6-112-rac1 Desktop]$ srvctl stop asm -node oel6-112-rac1 -stopoption abort -force
10. Check the status of ASM instance in RAC Database instance1 (rac1)
[oracle@oel6-112-rac1 Desktop]$ srvctl status asm
PRCR-1070 : Failed to check if resource ora.asm is registered
Cannot communicate with crsd
11. Checking the status of cluster services in RAC Database instance1 (rac1)
[oracle@oel6-112-rac1 Desktop]$ crsctl check cluster
CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
12. Checking the status of ASM & RAC Database in instance1 (rac1)
[oracle@oel6-112-rac1 Desktop]$ ps -ef | grep pmon
oracle 3813 1 0 17:40 ? 00:00:00 mdb_pmon_-MGMTDB
oracle 5806 1 0 17:42 ? 00:00:00 ora_pmon_orcl1
oracle 6193 1 0 17:42 ? 00:00:00 apx_pmon_+APX1

Note: Before Flex ASM, a database instance was associated with the specific ASM instance running on its own node. Now, even if the local ASM instance cannot be brought up or its services go down, the database instance can still be brought up, because it will look for an ASM instance running anywhere in the cluster. Figure 3 depicts this high-availability feature of Flex ASM.
13. Check the status of RAC Database instance running without ASM instance in RAC database instance1 (rac1)
[oracle@oel6-112-rac1 Desktop]$ . oraenv
ORACLE_SID = [orcl1] ? orcl1
ORACLE_HOME = [/home/oracle] ? /u01/app/oracle/product/12.1.0/db_1
The Oracle base remains unchanged with value /u01/app/oracle
14. Log into Database instance from RAC database instance1 (rac1)
[oracle@oel6-112-rac1 Desktop]$ sqlplus /nolog

SQL*Plus: Release 12.1.0.1.0 Production on Wed Sep 25 18:24:36 2013
Copyright (c) 1982, 2013, Oracle. All rights reserved.

SQL> connect sys/oracle@orcl as sysdba
Connected.
SQL> select instance_name,instance_number from gv$instance;

INSTANCE_NAME    INSTANCE_NUMBER
---------------- ---------------
orcl2                          2
orcl1                          1

SQL> select instance_name,instance_number from v$instance;

INSTANCE_NAME    INSTANCE_NUMBER
---------------- ---------------
orcl2                          2

SQL> connect sys/oracle@orcl as sysdba
Connected.
SQL> select instance_name,instance_number from gv$instance;

INSTANCE_NAME    INSTANCE_NUMBER
---------------- ---------------
orcl1                          1
15. Connecting to ASM instance of RAC Database instance2 (rac2) from RAC Database instance1 (rac1)
[oracle@oel6-112-rac1 Desktop]$ . oraenv
ORACLE_SID = [orcl1] ? +ASM2
ORACLE_HOME = [/home/oracle] ? /u01/app/12.1.0/grid
The Oracle base remains unchanged with value /u01/app/oracle
[oracle@oel6-112-rac1 Desktop]$ asmcmd --privilege sysasm --inst +ASM2
ASMCMD> lsdg
State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N         512   4096  1048576     15342     4782                0            4782              0             Y  DATA/
ASMCMD>

Summary: The database instance was using a dedicated ASM instance, and that ASM instance was forced to stop, simulating a failure; the database instance then reconnected to an existing ASM instance on another node, in this example node 2 (rac2).
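The pmon checks used throughout the walkthrough can be scripted. The snippet below is a small self-contained sketch: it parses the sample process listing from step 12 (inlined as text so it runs anywhere) and prints the SID of each remaining background process, which is how you would confirm that the database instance survived after +ASM1 was aborted. On a live node you would replace the printf with `ps -ef | grep '[p]mon'`.

```shell
# Sample "ps -ef | grep pmon" output from step 12, after +ASM1 was
# aborted: the database pmon (ora_pmon_orcl1) is still present.
printf '%s\n' \
  'oracle 3813 1 0 17:40 ? 00:00:00 mdb_pmon_-MGMTDB' \
  'oracle 5806 1 0 17:42 ? 00:00:00 ora_pmon_orcl1' \
  'oracle 6193 1 0 17:42 ? 00:00:00 apx_pmon_+APX1' |
awk '{ split($NF, a, "pmon_"); print a[2] }'
# Prints:
# -MGMTDB
# orcl1
# +APX1
```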
Oracle Database 11.2 or earlier
As mentioned in the introduction, before Oracle 12c the association between an ASM instance and a database instance was fixed to the node. This means that if an ASM instance could not be brought up, the database instance on that node could not be brought up either, making the database inaccessible.

1. Log into RAC Database instance1 (rac1)
login as: oracle
oracle@192.168.xx.xx's password:
Last login: Fri Sep 27 06:05:44 2013
2. Check the status of ASM & RAC Database instances:
[oracle@rac1 ~]$ ps -ef | grep pmon
oracle 3053 1 0 05:56 ? 00:00:00 asm_pmon_+ASM1
oracle 3849 1 0 05:57 ? 00:00:00 ora_pmon_flavia1
3. Check the status of ASM instance in RAC Database instance1 (rac1)
[oracle@rac1 ~]$ srvctl status asm
ASM is running on rac2,rac1
4. Check the status of Cluster in RAC Database instance1 (rac1)
[oracle@rac1 ~]$ crsctl check cluster
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
5. Stop the ASM instance in RAC Database instance1 (rac1)
[oracle@rac1 ~]$ srvctl stop asm -n rac1 -o abort -f
6. Check the status of ASM instance in RAC Database instance1 (rac1)
[oracle@rac1 ~]$ srvctl status asm
ASM is running on rac2
7. Check the status of ASM & RAC Database instance (rac1)
[oracle@rac1 ~]$ ps -ef | grep pmon
oracle 7885 5795 0 06:20 pts/0 00:00:00 grep pmon
Summary: The database instance is strongly linked to the ASM instance. If an ASM instance fails so will the database instance on the same node.
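The crsctl health check that appears in both walkthroughs can be reduced to a yes/no answer. The sketch below greps a captured `crsctl check cluster` transcript, inlined here so the example is self-contained; on a live node you would pipe the real command's output instead.

```shell
# Sample healthy transcript, as seen in the steps above.
crs_output='CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online'

# Count lines that do not report "is online"; zero means healthy.
offline=$(printf '%s\n' "$crs_output" | grep -c -v 'is online' || true)
if [ "$offline" -eq 0 ]; then
    echo 'cluster healthy'
else
    echo "problems: $offline service(s) not online"
fi
# Prints: cluster healthy
```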
Why use Oracle Flex ASM
- Oracle Flex ASM supports larger LUN sizes for Oracle Database 12c clients.
- Maximum number of Disk Groups supported is 511.
- Flexibility to rename an ASM Disk in a Disk Group.
- ASM instance Patch-level verification
- Patch level verification is disabled during rolling patches
- Replicated Physical Metadata
Network enhancement in Oracle Flex ASM
- In previous versions the cluster required:
- A public network for client application access
- One or more private networks for inter-node communication within the cluster including ASM traffic
- Flex ASM adds the ASM network, which can be used for communication between ASM and its clients, isolating and offloading ASM traffic.
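As a sketch, the cluster's networks, including a dedicated ASM network, can be listed and registered with the oifcfg tool from the Grid home; the interface name and subnet below are illustrative, not values from this article's cluster.

```shell
# List the interfaces the cluster currently knows about
# (public, cluster_interconnect, asm).
oifcfg getif

# Register eth2 as a dedicated ASM network (illustrative subnet).
oifcfg setif -global eth2/192.168.2.0:asm
```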
Deploying Flex ASM
Below are screen shots from the Flex ASM Installer.
- Choose the option "Advanced Installation"
Three storage options are available:
- Standard ASM (the pre-12c ASM configuration mode)
- Oracle Flex ASM (recommended)
- Non-ASM managed storage
Managing Flex ASM:
- No Flex ASM-specific instance parameters are required
- ASM server instances use automatic memory management (AMM)
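One further management task worth sketching: from an ASM instance you can see which database instances it is serving and, with Flex ASM, move a client to another ASM instance. The client identifier format 'instance_name:db_name' and the names below are illustrative assumptions; this requires a live Grid environment.

```shell
# Run as the Grid user with the ASM environment set.
sqlplus -S / as sysasm <<'EOF'
-- Which clients is this ASM instance serving?
SELECT instance_name, db_name, status FROM v$asm_client;
-- Relocate one client to another ASM instance (Flex ASM only;
-- 'orcl1:orcl' is a hypothetical instance:database pair).
ALTER SYSTEM RELOCATE CLIENT 'orcl1:orcl';
EOF
```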