This series has talked at great length; now it’s time to actually get something done. In this article, I’m going to show you how to connect to storage. How you connect depends entirely on how the storage is presented. You can use either PowerShell or the GUI, and since this post is likely to be long anyway, I’ve decided to show only GUI methods.
Notice: All of the screenshots and procedures in this post are from 2012 R2. 2012 shouldn’t be dramatically different, but I did not compare this to a 2012 system.
Part 1 – Hyper-V storage fundamentals
Part 2 – Drive Combinations
Part 3 – Connectivity
Part 4 – Formatting and file systems
Part 5 – Practical Storage Designs
Part 6 – How To Connect Storage
Part 7 – Actual Storage Performance
In this section, I intend to show you how to connect to and use local disks. The good news is that once you’ve connected to a LUN on Fibre Channel or iSCSI storage, it is treated the same way as a local disk.
You have the option of using the Disk Management MMC console, but this is the 2012 era, and Server Manager is the current tool. Server Manager can’t control CD/DVD drives or MPIO policies (unless I missed something) but it can do everything else.
Two machines are running Hyper-V Server. The third is running a GUI version of Windows Server and hosts the storage for the cluster. In the following screenshot, I’m looking at the Disks tab on the File and Storage Services section of Server Manager on that Windows Server.
It is also connected to both of the nodes (using item 2, Add other servers to manage on the welcome screen of Server Manager). Server Manager also comes in the RSAT tools for Windows desktops. You must use Windows 8.1 to manage 2012 R2 servers.
As you might have expected, I’ve modified the builds somewhat from the document. SVHV1 is still exactly as described in the document. In SVHV2, I’ve purchased another 250GB drive and converted it to a RAID 1 using the BIOS (I’ve got another for SVHV1 as well, I just haven’t gotten around to rebuilding that system yet). SVSTORE has the drives from the document, but I’ve also picked up a pair of 3 TB drives (the 2TB drive didn’t make it into the screenshot).
The 3TB drives are completely fresh, with no data. I want them to be in a mirror for the virtual machines. This will give them superior read performance and, more importantly, protect them from drive failure. If this system were intended to host virtual machines locally, I’d want this to be in a hardware RAID-1. That’s because any software mirroring takes processing power, and you really shouldn’t be stealing CPU time from your VMs. Unfortunately, I learned that the N40L’s RAID processor just can’t handle drives of this size. Apparently, I can buy an add-in card, but I just don’t want to deal with it. So, I’m going to use Storage Spaces to create a mirror. Before I get into that, I’ll show you how to set up a single disk.
Prepare a Local Disk for Usage
This section will use a local Storage Spaces virtual disk for its example. These are exactly the same steps you’d use for a single internal disk, another type of virtual disk (such as one provided by an internal hardware RAID system), or an iSCSI or Fibre Channel disk.
I’m only going to show you one way to do this. There are quite a few others, even in the GUI. The way most of us are used to doing it is through Disk Management, and that process has not changed. I’m going to use Server Manager, because this is the new stuff.
In the File and Storage Services section of Server Manager, go to the Disks sub-section under Volumes. In the Disk list at the top, find your new disk. If it’s freshly installed, it’s probably Offline. If it’s online, it may show an Unknown partition.
To get started, right-click on the disk you want to work with and click Bring Online. You’ll get a warning that onlining a disk currently in use by another server might cause data loss. Acknowledge the message (I’m assuming, for this section, that the disk isn’t being used by another server). From this point, you’d traditionally Initialize the disk, then create a volume. With Server Manager, you can now get it all done in a single wizard: right-click on your freshly onlined disk and choose New Volume. This kicks off the New Volume wizard. The first screen is informational, so click Next.
The second screen has you choose the server and disk to work with. Since you specifically started by right-clicking on a disk, you should already have the correct disk selected:
Once you click Next, you’ll get a pop-up dialog. This appears because we skipped the Initialize step; it’s just telling us that it’s going to initialize the disk with a GPT partition table. That shouldn’t be a problem: the system already boots from its own disk, and Windows Server 2012 and later have no issues with GPT data disks. If you prefer MBR for any reason, you’ll have to use something other than Server Manager to initialize the disk. Click OK to proceed or Cancel to find another way.
The next few screens are modernized versions of Disk Management’s screens: capacity of the new volume, drive letter/mount selection, and format/label/allocation unit specification.
Depending on the options you have installed, the next screen will be for Deduplication. I don’t want to spend a lot of time on this, but the short introduction is that for Hyper-V, it is most appropriate for VDI. You can use it for hosting servers, but you’re sort of on your own if performance suffers or you don’t get the space savings you expect. Remember that Microsoft’s deduplication does not occur inline in real time; it runs on a set schedule.
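If you do decide to try deduplication, the rough PowerShell equivalent looks like the following. This is a hedged sketch: the drive letter E: is my placeholder, and the HyperV usage type (intended for VDI workloads) is only available starting in 2012 R2, so check what your build offers before relying on it.

```powershell
# Deduplication is a feature that must be installed first
Install-WindowsFeature -Name FS-Data-Deduplication

# Enable dedup on the volume; E: is a placeholder for your data volume.
# The HyperV usage type tunes it for open VDI virtual disks.
Enable-DedupVolume -Volume "E:" -UsageType HyperV

# Remember: dedup runs on a schedule, not inline. You can kick off an
# optimization pass manually if you don't want to wait for the schedule.
Start-DedupJob -Volume "E:" -Type Optimization
```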
After this, it’s the Confirmation and Results screens, where you can watch your disk get created. Unlike Disk Management, Server Manager’s wizard only performs quick formats, so this shouldn’t take long regardless of disk size. Once it’s done, your disk is ready to use.
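For reference, the same single-disk preparation can be done from PowerShell in a few lines. Treat this as a sketch rather than a recipe: disk number 1 and the “VMStore” label are my placeholders, so verify the disk number with Get-Disk before running anything this destructive.

```powershell
# Check which disk is which before touching anything
Get-Disk

# Bring the new disk online and clear its read-only flag
# (disk number 1 is an assumption -- substitute your own)
Set-Disk -Number 1 -IsOffline $false
Set-Disk -Number 1 -IsReadOnly $false

# Initialize with a GPT partition table, carve out a partition using
# all available space, and quick-format it as NTFS
Initialize-Disk -Number 1 -PartitionStyle GPT
New-Partition -DiskNumber 1 -UseMaximumSize -AssignDriveLetter |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel "VMStore" -Confirm:$false
```

Note that Format-Volume, like the Server Manager wizard, performs a quick format by default.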
Prepare a Storage Spaces Volume for Usage
The disk(s) that you want to use can be online or offline (right-click option on the Disks tab), but they must have at least some space that’s not already claimed by a volume or partition. For this demo, I’m going to use completely clean disks. To get started, go to the Storage Pools section in File and Storage Services. In the lower right, under Physical Disks, make sure that the disks you want to use appear. You can use the Tasks menu on the Storage Pools section at the top to Refresh or Rescan Disks if some are missing (why these aren’t on the Physical Disks Task menu is beyond me). To get started, open either of these menus and click New Storage Pool.
On the first screen of this wizard (not shown), you give the new Storage Space a name and, optionally, a description. I just called mine “SVSTORE Space”, because I’m not very creative. Click Next when you’re happy with the name (or at least accepting of it).
On the second screen, you select the disks you want to be part of the new Storage Space. On the right, each disk has a drop-down for Automatic, Manual, or Hot Spare. Automatic allows Storage Spaces to figure out the best way to use the disks. I haven’t spent any real time researching what Manual does, but using it allowed me to specify the interleave size during the creation of a virtual disk (that part comes later). Hot Spare does just that: it makes the disk available as a hot spare. If you’re not familiar with this term, a hot spare disk sits empty until an array disk fails.
The data from the failed disk is then rebuilt onto the hot spare, which comes online in its place. I’ve never seen this used in a two-disk system and I’m not sure how it would work; perhaps active/passive. Usually, hot spares are used in a parity system or as a +1 for mirrors or RAID-10 configurations. I selected Automatic for both my drives. Click Next once you’ve set the selections as desired.
Your hard work will be rewarded with a summary screen. If you like what you see, click Create; if not, click Back or Cancel. These directions will assume you went with Create. In that case, you’ll get a screen with a few progress bars; hopefully none of them turn red. These directions will also assume they stayed blue. Once the process completes, you can click Close. Before you do, you might want to check the box for Create a virtual disk when this wizard closes. If you don’t, then you’ll have to right-click on the new storage pool you just created and select New Virtual Disk…, which is an entire extra click.
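The pool-creation steps above can also be sketched in PowerShell. Again, a hedged example: the “SVSTORE Space” name matches what I used in the wizard, and the wildcard subsystem name assumes a standalone Storage Spaces host rather than a clustered one.

```powershell
# Find disks eligible for pooling (no partitions, not already in a pool);
# this is the same set shown under Physical Disks in Server Manager
$disks = Get-PhysicalDisk -CanPool $true

# Identify the local Storage Spaces subsystem
$subsystem = Get-StorageSubSystem -FriendlyName "Storage Spaces*"

# Create the pool; all selected disks default to Automatic usage
New-StoragePool -FriendlyName "SVSTORE Space" `
    -StorageSubSystemFriendlyName $subsystem.FriendlyName `
    -PhysicalDisks $disks
```

A disk can be flagged as a hot spare afterward with Set-PhysicalDisk and its Usage parameter, if you have a spare to dedicate.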
The first two screens of the virtual disk wizard are pretty easy. The first is just explanatory welcome text and the second has you choose your pool; my box only has one, so it didn’t take me long to decide. On the third screen, you give your new virtual disk a name and, optionally, a description. I just called mine “Mirror”, because, well, there’s definitely a reason I write technical material and not popular fiction. You’ll notice there’s a checkbox to tier the storage. Mine is grayed out because I have only spinning disks; you need both spinning disks and SSDs in the same pool for tiering to work. Click Next when this page is good enough.
It’s on the fourth screen that you get your first real choice: Simple, Mirror, and Parity. These are effectively RAID-0, RAID-1, and RAID-5/6, respectively. As I understand it, there are some differences between these and the industry-standard versions, but I don’t know what all of them are. I know that Storage Spaces mirroring has the ability to create a three-way copy if you add a third disk.
You can read part 2 of this series if you need a refresher on the various RAID types.
The next screen has you choose between fixed (thick) and thin provisioning. I have done plenty of virtualization on thin-provisioned storage in the past, including thin-provisioned virtual disks (VHDs) on thin-provisioned SAN LUNs. The upside is that things like volume creation can be a lot faster and snapshots can be a lot smaller (on actual SANs, at least; I don’t know about Storage Spaces).
The downside is fragmentation and other performance hits when the space is expanded. In larger storage systems with many spindles, the cost of these operations is usually lost in other latencies and is irrelevant to all but the most demanding systems (or neurotic administrators). In a two-spindle system, the hit is more noticeable… although the load is usually a lot lighter, so it still probably doesn’t really matter. But, I’ll never use this space for anything else, so I don’t really have a great reason to use thin provisioning.
The final screen you can make changes on is the size screen. I’m just going to use it all. I’ve heard rumors that having a single VM on a CSV and keeping both owned by the same node improves performance, but I’ve never seen a published benchmark that corroborates that. If that’s the sort of thing you like to do, then feel free to split your Spaces up. You might also want to have separate Spaces so you can have some for Hyper-V storage and others for regular file shares or SOFS storage.
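All of the virtual disk wizard’s choices — name, resiliency, provisioning type, and size — collapse into a single PowerShell command. A sketch, assuming the pool and disk names I used in the wizard:

```powershell
# Create a mirrored, fixed-provisioned virtual disk that consumes the
# whole pool. Swap Mirror for Simple or Parity to match the wizard's
# other resiliency options, or Fixed for Thin to thin-provision.
New-VirtualDisk -StoragePoolFriendlyName "SVSTORE Space" `
    -FriendlyName "Mirror" `
    -ResiliencySettingName Mirror `
    -ProvisioningType Fixed `
    -UseMaximumSize
```

The resulting virtual disk then shows up on the Disks tab like any other, where you can online, initialize, and volume it exactly as described in the single-disk section above.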