
This article describes how to migrate a single instance of Oracle
Database running on Oracle Solaris 10 with Oracle Solaris Cluster 3.3
into a clustered Oracle Solaris 11 environment without requiring any
modification to the database, by using an Oracle Solaris 10 Zones
cluster deployment.
The objective of this article is to show you how to migrate applications
from an Oracle Solaris 10 clustered environment into an Oracle Solaris
11 environment with minimal effort. The following are the steps for this
procedure:
- Preparing the source systems for migration
- Preparing the Oracle Solaris 10 Zones cluster on the target systems
- Installing the zone cluster
- Installing the cluster software in the zone cluster (optional)
- Re-creating the application setup (in this example, the database) on the target systems
- Verifying failover
The Source and Target Configurations
Figure 1 shows a view of the hardware and connectivity for the production source cluster, which is a two-node cluster called cluster-1 that has the following:
- Two SPARC sun4u Sun Fire V240 servers (db-host-1 and db-host-2)
- 10 GB of main memory
- Oracle Solaris 10 1/13
- Oracle Solaris Cluster 3.3 3/13
- HA for Oracle data service
- Oracle Database 10g Release 2 (version 10.2.0.5)
- Oracle's Sun ZFS Storage 7420 appliance
Figure 1. Source environment: hardware and connectivity.
Figure 2 shows a logical view of the source cluster, including the resource groups (rg) and resources (rs) for the HA for Oracle data service, as well as the dependencies between them.
Figure 2. Source environment: logical view of resources and resource groups.
Figure 3 shows a simplified view of the hardware and connectivity for the target cluster, new-phys-cluster, which is a two-node cluster with two of Oracle's Sun SPARC Enterprise T5120 servers (new-phys-host-1 and new-phys-host-2).
Figure 3. Target environment: hardware and connectivity.
Figure 4 shows a logical view of the target cluster running Oracle Solaris 11.1 and Oracle Solaris Cluster 4.1 software. This environment uses the solaris10 brand zone cluster to host the HA for Oracle Database 10g Release 2 configuration that is migrated from the source configuration shown in Figure 1.
Figure 4. Target environment: logical view.
In the logical view of the source cluster shown in Figure 2, the oracle-rg resource group contains the oracle-server-rs resource for the Oracle server instance; the logical host name resource (db-lh-rs) that manages the logical host db-lh, which is used by clients to connect to the Oracle server; and the Oracle listener resource (oracle-listener-rs).
The scal-mnt-rg resource group contains the oracle-rs and oradata-rs scalable mount-point resources, which manage the mount points /u01/app/oracle and /u02/oradata/, respectively.
These components are shown in the following status output.
db-host-1# clresourcegroup status

=== Cluster Resource Groups ===

Group Name       Node Name      Suspended   Status
----------       ---------      ---------   ------
oracle-rg        db-host-1      No          Online
                 db-host-2      No          Offline

scal-mnt-rg      db-host-1      No          Online
                 db-host-2      No          Online

db-host-1# clresource status

=== Cluster Resources ===

Resource Name        Node Name    State     Status Message
-------------        ---------    -----     --------------
oracle-listener-rs   db-host-1    Online    Online
                     db-host-2    Offline   Offline

db-lh-rs             db-host-1    Online    Online - LogicalHostname online.
                     db-host-2    Offline   Offline

oracle-server-rs     db-host-1    Online    Online
                     db-host-2    Offline   Offline

oracle-rs            db-host-1    Online    Online
                     db-host-2    Online    Online

oradata-rs           db-host-1    Online    Online
                     db-host-2    Online    Online
The oracle-listener-rs and oracle-server-rs resources have dependencies of type Resource_dependencies_offline_restart on other resources. For more details on the types of dependencies that are available in Oracle Solaris Cluster, see the r_properties(5) man page.

db-host-1# clresource show -p Resource_dependencies_offline_restart \
oracle-listener-rs

=== Resources ===

Resource:                                oracle-listener-rs
  Resource_dependencies_offline_restart: oracle-rs

db-host-1# clresource show -p Resource_dependencies_offline_restart \
oracle-server-rs

=== Resources ===

Resource:                                oracle-server-rs
  Resource_dependencies_offline_restart: oracle-rs oradata-rs
The oracle-listener-rs and oracle-server-rs resources have the following extension properties.
- Extension properties for the oracle-listener-rs resource are as follows:

Listener_name=LISTENER_DB1
ORACLE_HOME=/u01/app/oracle/product/10.2.0/Db_1

- Extension properties for the oracle-server-rs resource are as follows:

ALERT_LOG_FILE=/u02/oradata/admin/testdb1/bdump/alert_testdb1.log
ORACLE_HOME=/u01/app/oracle/product/10.2.0/Db_1
CONNECT_STRING=hauser/hauser
The Sun ZFS Storage 7420 appliance is configured in the cluster as follows.

db-host-1# clnas show -v -d all

=== NAS Devices ===

Nas Device:        qualfugu
  Type:            sun_uss
  userid:          osc_agent
  Project:         qualfugu-1/local/oracle_db
    File System:   /export/oracle_db/oradata
    File System:   /export/oracle_db/oracle
File system /export/oracle_db/oracle, which is mounted on the cluster nodes at mount point /u01/app/oracle, is the installation path for the Oracle Database software. File system /export/oracle_db/oradata, which is mounted on the cluster nodes at mount point /u02/oradata/, is the path for the database files. The following are the /etc/vfstab entries for these two file systems on both nodes of the cluster.

db-host-1# cat /etc/vfstab | grep oracle_db
qualfugu:/export/oracle_db/oracle - /u01/app/oracle nfs - no rw,suid,bg,hard,nointr,rsize=32768,wsize=32768,proto=tcp,noac,vers=3
qualfugu:/export/oracle_db/oradata - /u02/oradata/ nfs - no rw,suid,bg,hard,forcedirectio,nointr,rsize=32768,wsize=32768,hard,noac,proto=tcp,vers=3
The public network uses the Domain Name Service (DNS), which is running and configured as follows.
db-host-1# svcs dns/client
STATE          STIME    FMRI
online         Nov_04   svc:/network/dns/client:default

db-host-1# cat /etc/nsswitch.conf
#
# Copyright 2006 Sun Microsystems, Inc.  All rights reserved.
# Use is subject to license terms.
#
#
# /etc/nsswitch.dns:
#
# An example file that could be copied over to /etc/nsswitch.conf; it uses
# DNS for hosts lookups, otherwise it does not use any other naming service.
#
# "hosts:" and "services:" in this file are used only if the
# /etc/netconfig file has a "-" for nametoaddr_libs of "inet" transports.

# DNS service expects that an instance of svc:/network/dns/client be
# enabled and online.

passwd:     files
group:      files

# You must also set up the /etc/resolv.conf file for DNS name
# server lookup.  See resolv.conf(4).

#hosts:     files dns
hosts:      cluster files dns

# Note that IPv4 addresses are searched for in all of the ipnodes databases
# before searching the hosts databases.

#ipnodes:   files dns
ipnodes:    files dns [TRYAGAIN=0]

networks:   files
protocols:  files
rpc:        files
ethers:     files
#netmasks:  files
netmasks:   cluster files
bootparams: files
publickey:  files

# At present there isn't a 'files' backend for netgroup; the system will
# figure it out pretty quickly, and won't use netgroups at all.
netgroup:   files
automount:  files
aliases:    files
services:   files
printers:   user files

auth_attr:  files
prof_attr:  files
project:    files

tnrhtp:     files
tnrhdb:     files

db-host-1# cat /etc/resolv.conf
domain mydomain.com
nameserver 13.35.29.41
nameserver 19.13.8.13
nameserver 13.35.24.52
search mydomain.com
The following is the identity of the oracle user.

db-host-1# id -a oracle
uid=602(oracle) gid=4051(oinstall) groups=4052(dba)
The oracle user is configured with the following profile:

db-host-1# su - oracle
Oracle Corporation      SunOS 5.10      Generic Patch   January 2005
db-host-1$ cat .profile
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=/u01/app/oracle/product/10.2.0/Db_1
export ORACLE_SID=testdb1
export PATH=$PATH:$ORACLE_HOME/bin
export TERM=vt100
Preparing the Source Systems for Migration
Before you begin the migration, perform the following tasks.
- Copy the contents of /var/opt/oracle/ to the NFS location /net/qualfugu/archive/optbkp/.

db-host-1# cp -rf /var/opt/oracle/ /net/qualfugu/archive/optbkp/
- Unconfigure the Sun ZFS Storage 7420 appliance from the old cluster.
db-host-1# clresourcegroup offline oracle-rg scal-mnt-rg
db-host-1# clresource disable -g oracle-rg,scal-mnt-rg +
db-host-1# clresourcegroup unmanage oracle-rg scal-mnt-rg
db-host-1# clnas remove-dir -d qualfugu-1/local/oracle_db qualfugu
db-host-1# clnas remove qualfugu
- (Optional) Delete both resource groups.
db-host-1# clresourcegroup delete oracle-rg scal-mnt-rg
- Shut down the cluster.
db-host-1# cluster shutdown -y -g0
Note: If you later need to revert, the Sun ZFS Storage 7420 appliance can be configured again by using the clnas add command, and the resource groups can be brought online by using the clresourcegroup online -emM command (if they were not deleted).
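The following is a minimal sketch of such a restore, assuming the resource groups were preserved; the osc_agent user and project path are the ones used earlier in this setup:

db-host-1# clnas add -t sun_uss -p userid=osc_agent qualfugu
Enter password:
db-host-1# clnas add-dir -d qualfugu-1/local/oracle_db qualfugu
db-host-1# clresourcegroup online -emM scal-mnt-rg oracle-rg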
Preparing the Oracle Solaris 10 Zones Cluster on the Target Systems
The target two-node cluster new-phys-cluster includes the following (as shown in Figure 3 and Figure 4): two Sun SPARC Enterprise T5120 systems (new-phys-host-1 and new-phys-host-2) running Oracle Solaris 11.1 and Oracle Solaris Cluster 4.1.
We will create a new solaris10 brand zone cluster, cluster-1, on the target systems. The zone cluster will host the Oracle Database service that was configured in the source cluster running in an Oracle Solaris 10 environment.
Note: The procedure below assumes that the two-node target cluster is already set up with the Oracle Solaris Cluster 4.1 software.
- Ensure that the cluster nodes are online.
new-phys-host-1:~# clnode status

=== Cluster Nodes ===

--- Node Status ---

Node Name          Status
---------          ------
new-phys-host-2    Online
new-phys-host-1    Online
- Install the solaris10 brand zone's support package on both nodes.

new-phys-host-1:~# pkg publisher
PUBLISHER                   TYPE     STATUS   URI
solaris                     origin   online
ha-cluster                  origin   online
new-phys-host-1:~# pkg install pkg:/system/zones/brand/brand-solaris10
           Packages to install:  1
       Create boot environment: No
Create backup boot environment: Yes

DOWNLOAD                                PKGS       FILES    XFER (MB)
Completed                                1/1       44/44      0.4/0.4

PHASE                                        ACTIONS
Install Phase                                  74/74

PHASE                                          ITEMS
Package State Update Phase                       1/1
Image State Update Phase                         2/2

new-phys-host-2:~# pkg publisher
PUBLISHER                   TYPE     STATUS   URI
solaris                     origin   online
ha-cluster                  origin   online
new-phys-host-2:~# pkg install pkg:/system/zones/brand/brand-solaris10
           Packages to install:  1
       Create boot environment: No
Create backup boot environment: Yes

DOWNLOAD                                PKGS       FILES    XFER (MB)
Completed                                1/1       44/44      0.4/0.4

PHASE                                        ACTIONS
Install Phase                                  74/74

PHASE                                          ITEMS
Package State Update Phase                       1/1
Image State Update Phase                         2/2

- On each node, verify the status of the IPMP group for the public network, sc_ipmp0. This interface will be used to configure the node-scope network resource for the zone cluster.
new-phys-host-1:~# ipmpstat -g
GROUP       GROUPNAME   STATE     FDT       INTERFACES
sc_ipmp0    sc_ipmp0    ok        --        net0

new-phys-host-2:~# ipmpstat -g
GROUP       GROUPNAME   STATE     FDT       INTERFACES
sc_ipmp0    sc_ipmp0    ok        --        net0
- Create the following cluster-1.config zone configuration file so the solaris10 brand zone cluster (cluster-1) can be created in subsequent steps. This configuration uses the same host names for the cluster nodes and for the logical hosts as were used in the source setup shown in Figure 1.
new-phys-host-1:~# cat /cluster-1.config
create -b
set zonepath=/zones/cluster-1
set brand=solaris10
set autoboot=true
set limitpriv=default,proc_priocntl,proc_clock_highres
set enable_priv_net=true
set ip-type=shared
add net
set address=db-lh
set physical=auto
end
add capped-memory
set physical=10G
set swap=20G
set locked=10G
end
add dedicated-cpu
set ncpus=1-2
set importance=2
end
add attr
set name=cluster
set type=boolean
set value=true
end
add node
set physical-host=new-phys-host-1
set hostname=db-host-1
add net
set address=db-host-1/24
set physical=sc_ipmp0
end
end
add node
set physical-host=new-phys-host-2
set hostname=db-host-2
add net
set address=db-host-2/24
set physical=sc_ipmp0
end
end
add sysid
set root_password=ZiitH.NOLOrRg
set name_service="DNS{domain_name=mydomain.com name_server=13.35.24.52,13.35.29.41,19.13.8.13 search=mydomain.com}"
set nfs4_domain=dynamic
set security_policy=NONE
set system_locale=C
set terminal=vt100
set timezone=US/Pacific
end
Note: Oracle Solaris Cluster 4.1 software supports only the shared-ip type of solaris10 brand zone cluster. Oracle Solaris Cluster 4.1 SRU 3 adds support for the exclusive-ip type.
- Configure the new zone cluster by using the cluster-1.config file defined in the previous step. Because the old cluster setup is being preserved, the same name, cluster-1, is used for the new zone cluster.
new-phys-host-1:~# clzonecluster configure -f /cluster-1.config cluster-1
- Verify the zone cluster configuration.
new-phys-host-1:~# clzonecluster verify cluster-1
new-phys-host-1:~# clzonecluster status cluster-1

=== Zone Clusters ===

--- Zone Cluster Status ---

Name        Brand       Node Name         Zone Host Name   Status    Zone Status
----        -----       ---------         --------------   ------    -----------
cluster-1   solaris10   new-phys-host-1   db-host-1        Offline   Configured
                        new-phys-host-2   db-host-2        Offline   Configured
Installing the Zone Cluster
The following archive types are supported as the source archive for installing a solaris10 brand zone cluster:
- A native brand zone on an Oracle Solaris 10 system
- A cluster brand zone on an Oracle Solaris 10 cluster that has the proper patch level
- An Oracle Solaris 10 physical system (for example, a flash archive created as sketched after this list)
- An Oracle Solaris 10 physical cluster node
- A solaris10 zone archive (derived from an installed solaris10 brand zone)
- An Oracle VM Template for Oracle Solaris 10 Zones (which can be downloaded here; this requires some additional steps to be performed to extract the archive)
Although the original cluster, cluster-1, could be used to create an archive for the zone cluster installation, that option would involve making sure the original cluster meets the patch-level requirements for a successful zone cluster installation.
Instead, this document uses a known solaris10 zone archive (obtained by installing the archive from an Oracle VM Template) to ensure that the results described in this article can be reproduced by readers on their own systems. This approach includes extra steps for creating a dummy zone, obtaining the archive from the zone installed from the Oracle VM Template, and deleting the dummy zone that was used to obtain the archive.
- Download the Oracle VM Template to either of the two Sun SPARC Enterprise T5120 servers in the target cluster.
See the Oracle VM Template for Oracle Solaris 10 Zones README (which is embedded in the template) for more details.
Also, ensure you have the following components, which are used in this procedure:
- /net/qualfugu/archive/: A directory on the Sun ZFS Storage 7420 appliance that contains the Oracle VM Template and is also used to store the dummy zone archive.
- /net/qualfugu/osc-dir: A DVD or DVD image path for the Oracle Solaris Cluster 3.3 3/13 software, which is available here, and the relevant patches, which are available on My Oracle Support.
- /net/qualfugu/nas/: A directory that contains the NAS client package (SUNWsczfsnfs), which is required to configure the Sun ZFS Storage 7420 appliance for the Oracle Solaris Cluster configuration.
- Install the dummy zone to obtain the archive, as shown below, where -a 10.134.90.201 is the IP address and /zones/cluster-1 is the root path for the dummy zone.
new-phys-host-1:~# zfs create -o mountpoint=/zones rpool/zones
new-phys-host-1:~# cd /net/qualfugu/archive
new-phys-host-1:/net/qualfugu/archive# ./solaris-10u11-sparc.bin \
-a 10.134.90.201 -i net0 -p /zones/cluster-1 -z s10zone

This is an Oracle VM Template for Oracle Solaris Zones.
Copyright 2011, Oracle and/or its affiliates. All rights reserved.

This software and related documentation are provided under a license
agreement containing restrictions on use and disclosure and are protected
by intellectual property laws. Except as expressly permitted in your
license agreement or allowed by law, you may not use, copy, reproduce,
translate, broadcast, modify, license, transmit, distribute, exhibit,
perform, publish, or display any part, in any form, or by any means.
Reverse engineering, disassembly, or decompilation of this software,
unless required by law for interoperability, is prohibited.

Checking disk-space for extraction
                                   Ok
Extracting in /net/qualfugu/archive/bootimage.wuaWaV ...
100% [===============================>]
Checking data integrity
                                   Ok
Checking platform compatibility
The host and the image do not have the same Solaris release:
  host  Solaris release: 5.11
  image Solaris release: 5.10
Will create a Solaris 10 branded Zone.

IMAGE:     /net/qualfugu/archive/solaris-10u11-sparc.bin
ZONE:      s10zone
ZONEPATH:  /zones/cluster-1
INTERFACE: net0
VNIC:      vnicZBI43632
MAC ADDR:  2:8:20:92:88:96
IP ADDR:   10.134.90.201
NETMASK:   255.0.0.0
DEFROUTER:
#
# This file is deprecated. Default routes will be created for any router
# addresses specified here, but they will not change when the underlying
# network configuration profile (NCP) changes. For NCP-specific static
# routes, the '-p' option of the route(1M) command should be used.
#
# See netcfg(1M) for information about network configuration profiles.
TIMEZONE:  US/Pacific

Checking disk-space for installation
                                   Ok
Installing in /zones/cluster-1 ...
100% [==========================>]
- Create the archive from the installed zone and copy it to a location, such as NFS, where it is accessible to the cluster nodes. (We will use the /net/qualfugu/archive/ directory as the location where we will copy the archive.)
new-phys-host-1:~# cd /zones
new-phys-host-1:/zones# find cluster-1 -print | cpio -oP@/ | gzip > \
/net/qualfugu/archive/disk-image.cpio.gz
- Destroy the dummy zone.
new-phys-host-1:~# zoneadm -z s10zone uninstall
Are you sure you want to uninstall zone s10zone (y/[n])? y
new-phys-host-1:~# zonecfg -z s10zone delete
Are you sure you want to delete zone s10zone (y/[n])? y
- Install the zone cluster by using the obtained zone image.
new-phys-host-2:~# zfs create -o mountpoint=/zones rpool/zones
new-phys-host-1:~# clzonecluster install \
-a /net/qualfugu/archive/disk-image.cpio.gz cluster-1
Waiting for zone install commands to complete on all the nodes of the zone cluster "cluster-1"...
- Log in to the console of zone cluster-1 on all nodes of the zone cluster.
new-phys-host-1:~# zlogin -C cluster-1
new-phys-host-2:~# zlogin -C cluster-1
- Boot the zone cluster into the offline/running mode.
new-phys-host-1:~# clzonecluster boot -o cluster-1
Waiting for zone reboot commands to complete on all the nodes of the zone cluster "cluster-1"...
- After the zones boot, make sure that the system configuration was completed; if it was not, complete any pending system configuration (a quick check is sketched after the status output below).
new-phys-host-1:~# clzonecluster status cluster-1

=== Zone Clusters ===

--- Zone Cluster Status ---

Name        Brand       Node Name         Zone Host Name   Status    Zone Status
----        -----       ---------         --------------   ------    -----------
cluster-1   solaris10   new-phys-host-1   db-host-1        Offline   Running
                        new-phys-host-2   db-host-2        Offline   Running
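One quick way to spot incomplete configuration, assuming the zones are reachable with zlogin from the global zone, is to check each zone for SMF services that are not running cleanly:

new-phys-host-1:~# zlogin cluster-1 svcs -x
new-phys-host-2:~# zlogin cluster-1 svcs -x

No output means all services are healthy; otherwise, complete the pending configuration on the zone console (zlogin -C).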
Installing the Cluster Software in the Zone Cluster (Optional)
Note: You can skip this step if the archive contains cluster software, for example, if the archive is from an Oracle Solaris 10 physical cluster node or a cluster brand zone on an Oracle Solaris 10 system.
- Install the Oracle Solaris Cluster 3.3 software from the DVD or the downloaded DVD image from the global zone.
new-phys-host-1:~# clzonecluster install-cluster \
-d /net/qualfugu/osc-dir/ \
-p patchdir=/net/qualfugu/osc-dir,patchlistfile=plist-sparc \
-s all cluster-1
Preparing installation. Do not interrupt ...
Installing the packages for zone cluster "cluster-1" ...
Where:
- -d specifies the location of the cluster software DVD image.
- -p patchdir specifies the location of the patches to install along with the cluster software. The location must be accessible to all nodes of the cluster.
- patchlistfile specifies the file that contains the list of patches to install along with the cluster software inside the zone cluster. The location must be accessible to all nodes of the cluster. In this example, the patch list plist-sparc is as follows:

new-phys-host-1:~# cat /net/qualfugu/osc-dir/plist-sparc
145333-15

- -s specifies which agent packages to install along with the core cluster software. In this example, all is specified to install all the agent packages.
- Reboot the zone cluster to boot the zone into online/running mode.
new-phys-host-1:~# clzonecluster reboot cluster-1
- Verify that the zone cluster is now in online/running mode.
new-phys-host-1:~# clzonecluster status cluster-1

=== Zone Clusters ===

--- Zone Cluster Status ---

Name        Brand       Node Name         Zone Host Name   Status   Zone Status
----        -----       ---------         --------------   ------   -----------
cluster-1   solaris10   new-phys-host-1   db-host-1        Online   Running
                        new-phys-host-2   db-host-2        Online   Running
Re-creating the Application Setup on the Target Systems
- Configure, for the zone cluster, the Sun ZFS Storage 7420 appliance that was used in the source cluster setup. On db-host-1, install the required NAS software package in the zones of the zone cluster.
new-phys-host-1:~# zlogin cluster-1
[Connected to zone 'cluster-1' pts/2]
Last login: Mon Nov  5 21:20:31 on pts/2
Oracle Corporation      SunOS 5.10      Generic Patch   January 2005
db-host-1# cd /net/qualfugu/nas/
db-host-1# pkgadd -d . SUNWsczfsnfs
- Repeat Step 1 on db-host-2.
- Add the Sun ZFS Storage 7420 appliance to the cluster configuration.
db-host-1# clnas add -t sun_uss -p userid=osc_agent qualfugu
Enter password:
db-host-1# clnas find-dir qualfugu

=== NAS Devices ===

Nas Device:               qualfugu
  Type:                   sun_uss
  Unconfigured Project:   qualfugu-1/local/oracle_db

db-host-1# /usr/cluster/bin/clnas add-dir \
-d qualfugu-1/local/oracle_db qualfugu
db-host-1# /usr/cluster/bin/clnas show -v -d qualfugu

=== NAS Devices ===

Nas Device:        qualfugu
  Type:            sun_uss
  userid:          osc_agent
  Project:         qualfugu-1/local/oracle_db
    File System:   /export/oracle_db/oradata
    File System:   /export/oracle_db/oracle

- Create mount points on both zone cluster nodes and add /etc/vfstab entries.
db-host-1# mkdir -p /u01/app/oracle /u02/oradata/
db-host-1# echo "qualfugu:/export/oracle_db/oracle - /u01/app/oracle nfs - no rw,suid,bg,hard,nointr,rsize=32768,wsize=32768,proto=tcp,noac,vers=3" >> /etc/vfstab
db-host-1# echo "qualfugu:/export/oracle_db/oradata - /u02/oradata/ nfs - no rw,suid,bg,hard,forcedirectio,nointr,rsize=32768,wsize=32768,hard,noac,proto=tcp,vers=3" >> /etc/vfstab
db-host-2# mkdir -p /u01/app/oracle /u02/oradata/
db-host-2# echo "qualfugu:/export/oracle_db/oracle - /u01/app/oracle nfs - no rw,suid,bg,hard,nointr,rsize=32768,wsize=32768,proto=tcp,noac,vers=3" >> /etc/vfstab
db-host-2# echo "qualfugu:/export/oracle_db/oradata - /u02/oradata/ nfs - no rw,suid,bg,hard,forcedirectio,nointr,rsize=32768,wsize=32768,hard,noac,proto=tcp,vers=3" >> /etc/vfstab
- Create the oracle user on both nodes of the zone cluster. Make sure that the identity is the same as that of the source setup; a sketch of the commands follows the verification output below.
db-host-1# id -a oracle
uid=602(oracle) gid=4051(oinstall) groups=4052(dba)
db-host-2# id -a oracle
uid=602(oracle) gid=4051(oinstall) groups=4052(dba)
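The following is a minimal sketch of re-creating that identity on each node; the home directory and shell shown here are hypothetical and should match your source setup:

db-host-1# groupadd -g 4051 oinstall
db-host-1# groupadd -g 4052 dba
db-host-1# useradd -u 602 -g oinstall -G dba -m -d /export/home/oracle -s /bin/bash oracle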
- Mount the file systems and verify that the file systems are accessible by the oracle user on both nodes of the zone cluster.
Mount the file system on both nodes of the zone cluster using the following command:

# mount /u01/app/oracle

Verify the owner, group, and mode of $ORACLE_HOME/bin/oracle using the following command:

# ls -l $ORACLE_HOME/bin/oracle

Confirm that the owner, group, and mode are as follows:
- Owner: oracle
- Group: dba
- Mode: -rwsr-s--x
- Create the required file system resource group and resources.
db-host-1# clresourcegroup create -S scal-mnt-rg
db-host-1# clresource create -g scal-mnt-rg \
-t SUNW.ScalMountPoint \
-p MountPointDir=/u02/oradata/ \
-p FileSystemType=nas \
-p TargetFileSystem=qualfugu:/export/oracle_db/oradata \
oradata-rs
db-host-1# clresource create -g scal-mnt-rg \
-t SUNW.ScalMountPoint \
-p MountPointDir=/u01/app/oracle \
-p FileSystemType=nas \
-p TargetFileSystem=qualfugu:/export/oracle_db/oracle \
oracle-rs
- Bring the file system resources online.
db-host-1# clresourcegroup online -eM scal-mnt-rg
- Verify that the file systems are mounted properly on both nodes of the zone cluster.

db-host-1# clresource status -g scal-mnt-rg

=== Cluster Resources ===

Resource Name    Node Name    State    Status Message
-------------    ---------    -----    --------------
oracle-rs        db-host-1    Online   Online
                 db-host-2    Online   Online

oradata-rs       db-host-1    Online   Online
                 db-host-2    Online   Online

db-host-1# mount -p | grep qualfugu
qualfugu:/export/oracle_db/oracle - /u01/app/oracle nfs - no rw,suid,bg,hard,nointr,rsize=32768,wsize=32768,proto=tcp,noac,vers=3,xattr,zone=cluster-1,sharezone=10
qualfugu:/export/oracle_db/oradata - /u02/oradata/ nfs - no rw,suid,bg,hard,forcedirectio,nointr,rsize=32768,wsize=32768,hard,noac,proto=tcp,vers=3,xattr,zone=cluster-1,sharezone=10
db-host-2# mount -p | grep qualfugu
qualfugu:/export/oracle_db/oracle - /u01/app/oracle nfs - no rw,suid,bg,hard,nointr,rsize=32768,wsize=32768,proto=tcp,noac,vers=3,xattr,zone=cluster-1,sharezone=10
qualfugu:/export/oracle_db/oradata - /u02/oradata/ nfs - no rw,suid,bg,hard,forcedirectio,nointr,rsize=32768,wsize=32768,hard,noac,proto=tcp,vers=3,xattr,zone=cluster-1,sharezone=10
Note: Often the database has a specific SRM project (entries in /etc/project) that is used to give the database the correct level of system resources. For such configurations, these project specifications must be reproduced in the solaris10 brand zone cluster as well, as illustrated in the sketch below.
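For example, a hypothetical project for the oracle user could be re-created on each zone cluster node as follows (the project name and resource control shown are illustrative only, and should be copied from the source setup):

db-host-1# projadd -U oracle -K "project.max-shm-memory=(priv,4G,deny)" user.oracle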
- Restore the contents of /var/opt/oracle on all nodes of the zone cluster.
db-host-1# mkdir -p /var/opt/oracle/
db-host-2# mkdir -p /var/opt/oracle/
db-host-1# cp -rf /net/qualfugu/archive/optbkp/ /var/opt/oracle/
db-host-2# cp -rf /net/qualfugu/archive/optbkp/ /var/opt/oracle/
- Create the resource groups and resources required for HA for Oracle.
db-host-1# clresourcetype register SUNW.oracle_server
db-host-1# clresourcetype register SUNW.oracle_listener
db-host-1# clresourcegroup create oracle-rg
db-host-1# clreslogicalhostname create -g oracle-rg \
-h db-lh db-lh-rs
db-host-1# clresource create -g oracle-rg \
-t oracle_listener \
-p ORACLE_HOME=/u01/app/oracle/product/10.2.0/Db_1 \
-p Listener_name=LISTENER_DB1 \
-p Resource_dependencies_offline_restart=oracle-rs \
oracle-listener-rs
db-host-1# clresource create -g oracle-rg \
-t oracle_server -p ORACLE_SID=testdb1 \
-p ALERT_LOG_FILE=/u02/oradata/admin/testdb1/bdump/alert_testdb1.log \
-p ORACLE_HOME=/u01/app/oracle/product/10.2.0/Db_1 \
-p CONNECT_STRING=hauser/hauser \
-p Resource_dependencies_offline_restart=oradata-rs,oracle-rs \
oracle-server-rs
db-host-1# clresourcegroup online -eM oracle-rg
db-host-1# clresource status -g oracle-rg

=== Cluster Resources ===

Resource Name        Node Name    State     Status Message
-------------        ---------    -----     --------------
oracle-server-rs     db-host-2    Offline   Offline
                     db-host-1    Online    Online

oracle-listener-rs   db-host-2    Offline   Offline
                     db-host-1    Online    Online

db-lh-rs             db-host-2    Offline   Offline - LogicalHostname offline.
                     db-host-1    Online    Online - LogicalHostname online.
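Optionally, client connectivity through the logical host can be spot-checked at this point. The following sketch assumes the default listener port 1521 and uses the testdb1 SID and hauser account configured above; adjust it to your listener configuration:

db-host-1$ sqlplus hauser/hauser@'(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=db-lh)(PORT=1521))(CONNECT_DATA=(SID=testdb1)))'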
Verifying Failover
- Perform failover verification by halting one zone where the resources are online.
db-host-1# halt -q
halt: can't turn off auditd
[Connection to zone 'cluster-1' pts/2 closed]
- Verify that the status of the zone cluster on node new-phys-host-1 is offline/installed.
new-phys-host-1:~# clzonecluster status cluster-1

=== Zone Clusters ===

--- Zone Cluster Status ---

Name        Brand       Node Name         Zone Host Name   Status    Zone Status
----        -----       ---------         --------------   ------    -----------
cluster-1   solaris10   new-phys-host-1   db-host-1        Offline   Installed
                        new-phys-host-2   db-host-2        Online    Running
- Log in to the other node of the zone cluster and verify the status of the resources.
new-phys-host-2:~# zlogin cluster-1
[Connected to zone 'cluster-1' pts/2]
Last login: Mon Nov  5 21:30:31 on pts/2
Oracle Corporation      SunOS 5.10      Generic Patch   January 2005
db-host-2# clresource status -g oracle-rg

=== Cluster Resources ===

Resource Name        Node Name    State     Status Message
-------------        ---------    -----     --------------
oracle-server-rs     db-host-2    Online    Online
                     db-host-1    Offline   Offline

oracle-listener-rs   db-host-2    Online    Online
                     db-host-1    Offline   Offline

db-lh-rs             db-host-2    Online    Online - LogicalHostname online.
                     db-host-1    Offline   Offline - LogicalHostname offline.
- Boot the zone cluster node that was halted in Step 1, and switch back the resource group. Note that the following output is the same as that obtained from the source setup shown in Figure 1.
new-phys-host-1:~# clzonecluster boot -n new-phys-host-1 cluster-1
new-phys-host-1:~# zlogin cluster-1
[Connected to zone 'cluster-1' pts/2]
Last login: Mon Nov  5 21:30:31 on pts/2
Oracle Corporation      SunOS 5.10      Generic Patch   January 2005
db-host-1# clresourcegroup switch -n db-host-1 oracle-rg
db-host-1# clresourcegroup status

=== Cluster Resource Groups ===

Group Name       Node Name      Suspended   Status
----------       ---------      ---------   ------
oracle-rg        db-host-1      No          Online
                 db-host-2      No          Offline

scal-mnt-rg      db-host-1      No          Online
                 db-host-2      No          Online

db-host-1# clresource status

=== Cluster Resources ===

Resource Name        Node Name    State     Status Message
-------------        ---------    -----     --------------
oracle-server-rs     db-host-1    Online    Online
                     db-host-2    Offline   Offline

oracle-listener-rs   db-host-1    Online    Online
                     db-host-2    Offline   Offline

db-lh-rs             db-host-1    Online    Online - LogicalHostname online.
                     db-host-2    Offline   Offline - LogicalHostname offline.

oracle-rs            db-host-1    Online    Online
                     db-host-2    Online    Online

oradata-rs           db-host-1    Online    Online
                     db-host-2    Online    Online
The migration of the HA for Oracle service into the solaris10 brand zone cluster is now complete.
Conclusion
This article described how to migrate an Oracle database running on Oracle Solaris 10 in an Oracle Solaris Cluster environment to Oracle Solaris 11, without upgrading the database software, by using Oracle Solaris 10 Zones and the Oracle Solaris zone cluster features.
This migration is an example that can be extended to other application environments, and it shows how Oracle Solaris can help preserve infrastructure investments and lower migration costs.