
This article describes the steps you can perform to
migrate from Oracle Solaris 10 to Oracle Solaris 11 11/11 with minimal
downtime. Note that this procedure will not work for Oracle Solaris 11.1 or
later. Once a system has been migrated, administrators are expected to use
the integrated software management tools to update it to a later release.
First, you create a set of ZFS send archives (a golden image) on
an Oracle Solaris 11 11/11 system that is the same model as your Oracle
Solaris 10 system. Then you install this golden image on an unused
disk of the system running Oracle Solaris 10 so that it can be rebooted
into Oracle Solaris 11 11/11. The basic system configuration
parameters from the Oracle Solaris 10 image are stored and applied to
the Oracle Solaris 11 11/11 image.
Note:
Migrating the installed software to a system of a different model is not
supported. For example, an image created on a SPARC M-Series system
from Oracle cannot be deployed on a SPARC T-Series system from Oracle.
Also, at this time, this procedure applies only to migrating to Oracle
Solaris 11 11/11, not to other releases of Oracle Solaris 11.
Overview of the Process and Requirements
This live install procedure has the following four phases:
- Phase 1: Creating the Oracle Solaris 11 11/11 Archive
- Phase 2: Preparing to Configure the Oracle Solaris 11 11/11 System
- Phase 3: Migrating the Oracle Solaris 11 11/11 Archive
- Phase 4: Configuring the Oracle Solaris 11 11/11 System
This article refers to two systems:
- The archive system is a system on which an Oracle Solaris 11 11/11 archive is created.
- The migration system is a system that is currently running Oracle Solaris 10 and is being migrated to Oracle Solaris 11 11/11.
A ZFS archive is created for the root pool and its associated data sets from a freshly installed Oracle Solaris 11 11/11 system (the archive system). When the archive is created, it may be saved on local removable media, such as a USB drive, or sent across the network to a file server from which it can later be retrieved. When it is time to make use of the archive, you perform the following high-level steps:
- You start a superuser-privileged shell on the Oracle Solaris 10 system that is to be migrated to Oracle Solaris 11 11/11 (the migration system).
- You select and configure a boot disk device and you create the new ZFS root pool.
- You restore the archived ZFS data sets in the new pool.
- You perform the final configuration and then reboot the migration system.
The procedure has the following requirements:
- The archive system and the migration system are the same model (for example, SPARC T-Series), and both meet the Oracle Solaris 11 11/11 minimum requirements.
- The migration system is running Oracle Solaris 10 8/11 or later, which is necessary in order to have a version of ZFS that is compatible with Oracle Solaris 11 11/11.
- If the migration system is running
Oracle Solaris 10 8/11, apply the following ZFS patch before attempting
to restore the archive. Without this patch, any attempt to restore the
archive will fail. The patch is not necessary with any later release
of Oracle Solaris 10.
- Patch 147440-11 or later for SPARC-based systems
- Patch 147441-11 or later for x86-based systems
Note: The migration system must be rebooted after applying the patch. (A quick check for confirming the patch level is shown after this list.)
- Ensure that the disks that will house the new ZFS pool are at least as large in total capacity as the space allocated in the archived pools. This is discussed in more detail in the Preparation section.
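To confirm that the required ZFS patch is already installed on the Oracle Solaris 10 8/11 migration system, one quick check uses showrev(1M) to list installed patches. This is a minimal sketch; the patch ID shown is the SPARC one, so substitute 147441 on an x86-based system:

# showrev -p | grep 147440

If the patch (at revision 11 or later) does not appear in the output, apply it and reboot before attempting to restore the archive.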
Note that any Oracle Solaris 10 zones on the migration system can be moved to Oracle Solaris 11 as solaris10 branded zones using separate procedures that are outside the scope of this document. Also note that it is not possible to run Oracle Solaris 8 or Oracle Solaris 9 zones on an Oracle Solaris 11 system.
The archive that is created will not have the desired system configuration, since it will be created on a different host than the host on which it will eventually be run. Configuration of the archive (after migration) is covered in Phase 4. It will be necessary to reconfigure each boot environment in the archive after the migration is complete and before Oracle Solaris 11 11/11 is booted. For this reason, the archive should contain only a single boot environment (BE).
No hardware-specific configuration data is carried in the archive image. Hardware-specific system characteristics that will not transfer with the backup include, but are not limited to, the following:
- Disk capacity and configuration (including ZFS pool configurations)
- Hardware Ethernet address
- Installed hardware peripherals
Phase 1: Creating the Oracle Solaris 11 11/11 Archive
Figure 1 depicts what happens when you create the Oracle Solaris 11 11/11 archive.
Figure 1. Creating the Oracle Solaris 11 11/11 Archive
Preparation
To prepare for migration, note the disk topology and ZFS pool configuration for the root pool on the migration system. Configure the target disk on the migration system similarly to the disks on the archive system, and size the new ZFS pool appropriately. At a minimum, the allocated amount of the pool (the ALLOC column in the zpool list output shown below) is required to ensure there is enough room to restore the data sets on the migrating system.

# zpool list
NAME    SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
rpool    68G  51.6G  16.4G  75%  1.00x  ONLINE  -

If any archival pool's capacity (as shown by the CAP column) exceeds 80%, best practices dictate that the migration pool should be grown to plan for capacity. Increasing the headroom in the pool can also be beneficial to performance, depending upon other configuration elements and the workload.

To prepare for later migration, save the output from various commands to a file that is kept with the archive for reference during migration. Listing 1 shows the commands that are recommended as a bare minimum, but other configuration information might be useful, depending upon the system configuration. The commands shown in Listing 1 with example output are for rpool only.

# zpool list
NAME    SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
rpool    68G  51.6G  16.4G  75%  1.00x  ONLINE  -

# zpool get all rpool
NAME   PROPERTY       VALUE                 SOURCE
rpool  size           68G                   -
rpool  capacity       75%                   -
rpool  altroot        -                     default
rpool  health         ONLINE                -
rpool  guid           18397928369184079239  -
rpool  version        33                    default
rpool  bootfs         rpool/ROOT/snv_175a   local
rpool  delegation     on                    default
rpool  autoreplace    off                   default
rpool  cachefile      -                     default
rpool  failmode       wait                  default
rpool  listsnapshots  off                   default
rpool  autoexpand     off                   default
rpool  dedupditto     0                     default
rpool  dedupratio     1.00x                 -
rpool  free           16.4G                 -
rpool  allocated      51.6G                 -
rpool  readonly       off                   -

# zpool status
  pool: rpool
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          c5t0d0s0  ONLINE       0     0     0

errors: No known data errors

# format c5t0d0s0
selecting c5t0d0s0
[disk formatted]
/dev/dsk/c5t0d0s0 is part of active ZFS pool rpool. Please see zpool(1M).

FORMAT MENU:
        disk       - select a disk
        type       - select (define) a disk type
        partition  - select (define) a partition table
        current    - describe the current disk
        format     - format and analyze the disk
        repair     - repair a defective sector
        label      - write label to the disk
        analyze    - surface analysis
        defect     - defect list management
        backup     - search for backup labels
        verify     - read and display labels
        save       - save new disk/partition definitions
        inquiry    - show disk ID
        volname    - set 8-character volume name
        !<cmd>     - execute <cmd>, then return
        quit
format> p

PARTITION MENU:
        0      - change `0' partition
        1      - change `1' partition
        2      - change `2' partition
        3      - change `3' partition
        4      - change `4' partition
        5      - change `5' partition
        6      - change `6' partition
        7      - change `7' partition
        select - select a predefined table
        modify - modify a predefined partition table
        name   - name the current table
        print  - display the current table
        label  - write partition map and label to the disk
        !<cmd> - execute <cmd>, then return
        quit
partition> p
Current partition table (original):
Total disk cylinders available: 14087 + 2 (reserved cylinders)

Part      Tag    Flag     Cylinders         Size            Blocks
  0       root    wm       1 - 14086       68.35GB    (14086/0/0) 143339136
  1 unassigned    wm       0                0         (0/0/0)             0
  2     backup    wu       0 - 14086       68.35GB    (14087/0/0) 143349312
  3 unassigned    wm       0                0         (0/0/0)             0
  4 unassigned    wm       0                0         (0/0/0)             0
  5 unassigned    wm       0                0         (0/0/0)             0
  6 unassigned    wm       0                0         (0/0/0)             0
  7 unassigned    wm       0                0         (0/0/0)             0
partition> ^D
#

Listing 1. Output from Various Commands
Place the information shown in Listing 1 from the system being archived, along with anything else that might be useful during migration, in a file, and store the file in the same location as the archive files for use later during the migration.
Alternatively, you can use the Oracle Explorer Data Collector to gather all system configuration information for later reference.
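As a convenience, the non-interactive commands from Listing 1 can be captured into a single reference file that is stored alongside the archive. The following is a minimal sketch; the output path is illustrative, and prtvtoc(1M) is used here to record the disk label instead of an interactive format session (the device name is the example root disk from Listing 1):

# { zpool list; zpool get all rpool; zpool status rpool; prtvtoc /dev/rdsk/c5t0d0s0; } > /path/to/rpool_config_$(hostname)_$(date +%Y%m%d).txt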
Archive Creation
To archive the root pool and include all snapshots, you need to create a ZFS replication stream. To do this, you first create a recursive snapshot from the top level of the pool, as described below. In the same manner, you can archive other pools that need to be archived and carried over to a migrated host.

Note that rpool is the default root pool name, but the root pool might be named differently on any given system. Use beadm list -d to determine on which pool the BE resides. In the remainder of this article, the default name rpool is used to reference the root pool.

Use the following command to create a recursive snapshot of the root pool. The snapshot name (archive, in this example) can be based upon the date or whatever descriptive labels you desire.

# zfs snapshot -r rpool@archive
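One way to confirm that the recursive snapshot was created for every data set in the pool before continuing is:

# zfs list -t snapshot -r rpool

Each data set in rpool should show a corresponding @archive snapshot in the output.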
Next, delete the swap and dump device snapshots because they likely do not contain any relevant data, and deleting them typically reduces the size of the archive significantly.
Note: Regarding the dump device, it is possible, though unlikely, that the dump device has data that has not yet been extracted to the /var data set (in the form of a core archive). If this is the case and the contents of the dump device should be preserved, dump the contents out to the file system prior to deleting the dump device snapshot.
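If you do need to preserve the dump device contents first, a minimal sketch using savecore(1M) follows; the target directory is illustrative and must exist before savecore writes to it:

# mkdir -p /var/crash/$(hostname)
# savecore -v /var/crash/$(hostname)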
The following commands delete the default-named swap and dump device snapshots, though there might be more deployed on a host.

# zfs destroy rpool/swap@archive
# zfs destroy rpool/dump@archive

Now that the snapshot has been prepared, the next step is to send it to a file for archival. If you are archiving more than one ZFS pool, each pool will have a snapshot, and each snapshot needs to be sent to its own archive file. The following steps focus on creating the archive for the root pool. However, you can archive any other pools on the system in the same manner.
To send the snapshot to a file, you pipe the zfs send command into a gzip command, as shown below, which results in a compressed file that contains the pool snapshot archive. When creating this archive file, it is a good idea to use a unique naming scheme that reflects the host name, the date, or other descriptive terms that will be useful in determining the contents of the archive at a later date.

You can save the archive file locally for later relocation or you can create it on removable media. The location where you store the archive file should be a file system that is backed up regularly. Also, although compression is used, enough storage space should be available on the file system. A good rule of thumb is to have enough capacity for the sum of the ALLOC amounts reported by zpool list.

Use the following command to create the archive file locally. The archive file name can be any string that helps identify this archive for later use. A common choice is the host name plus the date, as shown in the following example.
# zfs send -Rv rpool@archive | gzip > /path/to/archive_$(hostname)_$(date +%Y%m%d).zfs.gz

Now, move the archive file to a file server for later retrieval, as shown in Figure 2.
Figure 2. Ensuring Accessibility of the Oracle Solaris 11 11/11 Archive
Optionally, you can write the archive file directly to an NFS-mounted path, as shown below:
# zfs send -Rv rpool@archive | gzip > /net/FILESERVER/path/to/archive_$(hostname)_$(date +%Y%m%d).zfs.gz
Similarly, you can stream the archive file to a file server via ssh:

# zfs send -Rv rpool@archive | gzip | ssh USER@FILESERVER "cat > /path/to/archive_$(hostname)_$(date +%Y%m%d).zfs.gz"
Note that if you stream the archive across the network, the ssh transfer does not support any sort of suspend and resume functionality. Therefore, if the network connection is interrupted, you will need to restart the entire command.
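Before destroying the local snapshots, it is worth confirming that the archive file on the file server is intact. A minimal check, assuming the archive was written with gzip as shown above, is:

# ssh USER@FILESERVER "gzip -t /path/to/archive_myhost_20111011.zfs.gz"

gzip -t is silent when the file is intact; any error output (or a non-zero exit status) indicates a damaged archive that should be recreated.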
Now that the migration archive file has been created, destroy the local snapshots using the following command:

# zfs destroy -r rpool@archive
Phase 2: Preparing to Configure the Oracle Solaris 11 11/11 System
Before you boot the migrated Oracle Solaris 11 11/11 instance, prepare for the migration by gathering all the relevant system configuration parameters from the migration system that is running Oracle Solaris 10. The system configuration items you need to gather include, but are not limited to, the following:
- Host name
- Time zone
- Locale
- Root password
- Administrative user information
- Primary network interface (if it is not auto-configured)
- Name service information
The create-profile subcommand of sysconfig invokes the SCI Tool interface, queries you for the system configuration information, and then generates an SC profile you can use later to configure the system.

Use the following command to create an SC profile locally. The profile name can be any string that helps identify the profile for later use. The following example uses config with date information appended.

# sysconfig create-profile -o /path/to/config_$(date +%Y%m%d).xml
Then move the SC profile to a file server for later retrieval.
Optionally, you can create the SC profile and write it directly to an NFS-mounted path, as shown below.
# sysconfig create-profile -o /net/FILESERVER/path/to/config_$(date +%Y%m%d).xml
Phase 3: Migrating the Oracle Solaris 11 11/11 Archive
Figure 3 depicts what happens when you migrate the Oracle Solaris 11 11/11 archive.
Figure 3. Migrating the Oracle Solaris 11 11/11 Archive
Boot Device and Root Pool Preparation
The first step is to configure the new boot disk device.

As previously mentioned, you can replicate the original disk layout or you can use a different layout, as long as the following steps are taken and space at the beginning of the disk is reserved for boot data. The root pool does not need to be the same size as the original. However, the new pools must be large enough to contain all the data in the respective archive file (for example, as large as the ALLOC column in the zpool list output, as described previously).

Decide how to configure the boot device based upon the initial disk configuration on the archive system. To reiterate, what is required is that ultimately the ZFS pools you create are large enough to store the archive data sets described by the ALLOC amounts in the output of zpool list.

Use the format(1M) command to configure the disk partitions and/or slices, as desired. For boot devices, a VTOC label should be used, and the default configuration is a full-device slice 0 starting at cylinder 1. The files that were saved as part of the archive creation can provide guidance on how to best set up the boot device.

The example in Listing 2 shows how to select the desired boot device from the format utility's menu.

# format
Searching for disks...done

c3t3d0: configured with capacity of 68.35GB

AVAILABLE DISK SELECTIONS:
       0. c3t2d0
          /pci@0,0/pci1022,7450@2/pci1000,3060@3/sd@2,0
       1. c3t3d0
          /pci@0,0/pci1022,7450@2/pci1000,3060@3/sd@3,0
Specify disk (enter its number): 1
selecting c3t3d0
[disk formatted]

Listing 2. Selecting the Boot Disk
On an x86 system, if you see the message No Solaris fdisk partition found, then you need to create an fdisk partition:

format> fdisk
No fdisk table exists. The default partition for the disk is:

  a 100% "SOLARIS System" partition

Type "y" to accept the default partition, otherwise type "n" to edit the partition table.
y
format>
Now configure the slices as needed. Listing 3 is an example of setting up a full-capacity slice 0, which is the default configuration. The slice starts at cylinder 1 to leave room for boot software at the beginning of the disk. Note that the partition table might look different based upon your system architecture, disk geometry, and other variables.
format> partition

partition> print
Current partition table (default):
Total disk cylinders available: 8921 + 2 (reserved cylinders)

Part      Tag    Flag     Cylinders        Size            Blocks
  0 unassigned    wm       0               0         (0/0/0)            0
  1 unassigned    wm       0               0         (0/0/0)            0
  2     backup    wu       0 - 8920       68.34GB    (8921/0/0) 143315865
  3 unassigned    wm       0               0         (0/0/0)            0
  4 unassigned    wm       0               0         (0/0/0)            0
  5 unassigned    wm       0               0         (0/0/0)            0
  6 unassigned    wm       0               0         (0/0/0)            0
  7 unassigned    wm       0               0         (0/0/0)            0
  8       boot    wu       0 -    0        7.84MB    (1/0/0)        16065
  9 unassigned    wm       0               0         (0/0/0)            0

partition> 0
Part      Tag    Flag     Cylinders        Size            Blocks
  0 unassigned    wm       0               0         (0/0/0)            0

Enter partition id tag[unassigned]: root
Enter partition permission flags[wm]:
Enter new starting cyl[1]: 1
Enter partition size[0b, 0c, 1e, 0.00mb, 0.00gb]: $
partition>

Listing 3. Setting Up a Full-Capacity Slice 0
Once the slices are configured as needed, label the disk, as shown in Listing 4. Confirm the overall layout prior to moving on to the next step.
partition> label
Ready to label disk, continue? y

partition> print
Current partition table (unnamed):
Total disk cylinders available: 8921 + 2 (reserved cylinders)

Part      Tag    Flag     Cylinders        Size            Blocks
  0       root    wm       1 - 8920       68.34GB    (0/0/0)            0
  1 unassigned    wm       0               0         (0/0/0)            0
  2     backup    wu       0 - 8920       68.34GB    (8921/0/0) 143315865
  3 unassigned    wm       0               0         (0/0/0)            0
  4 unassigned    wm       0               0         (0/0/0)            0
  5 unassigned    wm       0               0         (0/0/0)            0
  6 unassigned    wm       0               0         (0/0/0)            0
  7 unassigned    wm       0               0         (0/0/0)            0
  8       boot    wu       0 -    0        7.84MB    (1/0/0)        16065
  9 unassigned    wm       0               0         (0/0/0)            0

partition> ^D

Listing 4. Labeling the Disk
ZFS Pool Creation and Archive Restoration
Now that you have configured the disk, create the new root pool on slice 0 using the following command:

# zpool create rpool cXtXdXs0
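Before restoring, one way to confirm that the new pool is at least as large as the ALLOC value recorded from the archive system is:

# zpool list rpool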
Note that if the archive system's root pool did not use the default name, rpool, use its name instead of rpool. The migration procedure can complete successfully when a different pool name is used, but the resulting ZFS file system will have a different mount point.

The next step is to restore the ZFS data sets from the archive file. If the archive is stored on removable media, attach and configure that media now so that the file can be accessed.
Once the file is accessible locally, restore the data sets using the following command:
# gzcat /path/to/archive_myhost_20111011.zfs.gz | zfs receive -vF rpool
Alternatively, if the files are stored on a networked file server, you can use the following command to stream the archive file and restore the data sets.
# ssh USER@FILESERVER "cat /path/to/archive_myhost_20111011.zfs.gz" | gzip -d | zfs receive -vF rpool

Note: The receive command might generate error messages of the following form: cannot receive $share2 property on rpool: invalid property value. This is expected and will not affect the operation of the restored data sets.

If other pools were archived for restoration on this host, you can restore them at this point using the same ZFS operation shown above.
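For example, if a separate data pool had been archived, it could be recreated and restored in the same way. The pool name datapool, the device name, and the archive file name below are hypothetical:

# zpool create datapool c3t4d0
# gzcat /path/to/archive_datapool_20111011.zfs.gz | zfs receive -vF datapool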
The data migration portion of the procedure is now complete. Some final steps must now be performed to ensure that the migration system will boot as expected.
Hardware Configuration and Test
Next, you need to create swap and dump devices for use with the migration system. Note that the default-named devices are used in this article; therefore, no further administrative tasks are required (for example, adding the swap device using swap(1M)), because the devices were already in use and are configured to run with this system at boot time. If the migration system has a memory configuration that differs from that of the system that was archived, the swap and dump devices might require a different size, but the names are still the same as in the previous configuration and, thus, they will be configured properly on the first boot of the migration system.

The swap and dump devices should be sized according to the advice in the Oracle Solaris Administration: Devices and File Systems and Oracle Solaris Administration: ZFS File Systems guides, which is roughly as shown in Table 1.
Table 1. Swap and Dump Device Sizes
| Physical Memory | Swap Size | Dump Size |
|---|---|---|
| System with up to 4 GB of physical memory | 1 GB | 2 GB |
| Mid-range server with 4 GB to 8 GB of physical memory | 2 GB | 4 GB |
| High-end server with 16 GB to 32 GB of physical memory | 4 GB | 8 GB+ |
| System with more than 32 GB of physical memory | 1/4 of total memory size | 1/2 of total memory size |
You can determine the amount of physical memory as follows:
$ prtconf | grep Memory
Memory size: 130560 Megabytes

Note that once the system is booted, you can add additional swap devices if needed.
Use the following commands to recreate swap and dump devices with appropriate capacities. Note that in this example, the migration system has 8 GB of memory installed.
# zfs create -b 128k -V 2GB rpool/swap
# zfs set primarycache=metadata rpool/swap
# zfs create -b 128k -V 4GB rpool/dump
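If the migration system's memory differs from the sizes used above, you can check and adjust the volume sizes before the first boot; the 4G value here is illustrative:

# zfs get volsize rpool/swap rpool/dump
# zfs set volsize=4G rpool/swap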
The BE that is to be activated needs to be mounted now so that it can be accessed and modified in the following steps. To identify the BE to mount, use the zfs list command:

# zfs list -r rpool/ROOT
NAME                        USED  AVAIL  REFER  MOUNTPOINT
rpool/ROOT                 3.32G   443G    31K  legacy
rpool/ROOT/solaris_11      3.32G   443G  3.02G  /
rpool/ROOT/solaris_11/var   226M   443G   220M  /var
BEs are located in the root pool in the rpool/ROOT data set. Each BE has at least two entries: the root data set and a /var data set. The BE in the example above is solaris_11.

The BE that will be active when the system reboots needs to be identified by setting the appropriate property on the root pool. To do this, use the zpool command:

# zpool set bootfs=rpool/ROOT/solaris_11 rpool
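To confirm that the pool will boot the intended BE, you can verify the property:

# zpool get bootfs rpool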
To mount the active BE data set, it is first necessary to change the mount point. Change the mount point and then mount the active BE data set using the following commands:

# zfs set mountpoint=/tmp/mnt rpool/ROOT/solaris_11
# zfs mount rpool/ROOT/solaris_11
The BE's root file system can now be accessed via the /tmp/mnt mount point. The first step is to install the boot software that will allow the host to boot the new root pool. The steps are different depending upon architecture, as shown below. Both examples use the /tmp/mnt BE mount point.
- To install the boot software on an x86-based host, use this command:
# installgrub /tmp/mnt/boot/grub/stage1 /tmp/mnt/boot/grub/stage2 /dev/rdsk/cXtXdXs0
- To install the boot software on a SPARC-based host, use this command:
# installboot -F zfs /tmp/mnt/usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/cXtXdXs0
After installing the boot software, clean up stale device entries under the mounted BE's /dev so that the device information is rebuilt for the migration system's hardware:

# devfsadm -Cn -r /tmp/mnt
Next, you need to direct the system to perform a reconfiguration boot on first boot, which will configure any new device hardware (as related to the archive system versus the migration system). To force a reconfiguration boot, you place a file named reconfigure at the top level of the BE's root file system. This action is not persistent, because the file is removed and, thus, the reconfiguration occurs only on the first boot after the file is placed.
Use the following command to set up a reconfiguration boot by creating the reconfigure file in the active BE's mounted file system:
# touch /tmp/mnt/reconfigure
If you are doing a live install on an x86 machine, the hostid file needs to be regenerated. If the file doesn't exist at boot time, it will be generated, so delete the file as follows:

# rm /tmp/mnt/etc/hostid
Phase 4: Configuring the Oracle Solaris 11 11/11 System
The SC profile created in Phase 2 will now be applied to the migration system. If an SC profile already exists on that system, remove it using the following command:

# rm /tmp/mnt/etc/svc/profile/site/profile*.xml
Next, two Oracle Solaris Service Management Facility profiles that are included in the Appendix (disable_sci.xml and unconfig.xml) need to be copied to /tmp/mnt/etc/svc/profile/site. These profiles will cause the system to perform an unconfigure before applying the SC profile generated earlier. Create /tmp/disable_sci.xml and /tmp/unconfig.xml by copying the XML information from the Appendix.

Now, copy the generated SC profile to the appropriate location, which is /tmp/mnt/etc/svc/profile/sc. This directory might not exist, so it might be necessary to create it.

# cp /tmp/disable_sci.xml /tmp/mnt/etc/svc/profile/site
# cp /tmp/unconfig.xml /tmp/mnt/etc/svc/profile/site
# cp /path/to/config_20111011.xml /tmp/mnt/etc/svc/profile/sc/

Next, unmount the BE and reset the mount point:
# zfs umount rpool/ROOT/solaris_11
# zfs set mountpoint=/ rpool/ROOT/solaris_11
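Before rebooting, it can be useful to confirm that the BE's mount point has been reset so that the first boot mounts it at /:

# zfs get mountpoint rpool/ROOT/solaris_11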
Then reboot the migration system.
As depicted in Figure 4, the migration system should now behave as the archive system did, barring any changes in the system configuration, physical topology, peripheral devices, or other hardware-related differences.
Figure 4. Rebooting from the New Boot Disk
Appendix
Listing 5 shows the contents of the disable_sci.xml profile.

Listing 5. Contents of the disable_sci.xml Profile

Listing 6 shows the contents of the unconfig.xml profile.

Listing 6. Contents of the unconfig.xml Profile