For all the new features and enhancements to Windows Server 2012 Hyper-V, the virtualization platform still has its shortcomings.
Having worked with Hyper-V since the early betas, I appreciate the speed at which Microsoft adds new features. With each Hyper-V release, however, I still see a few areas where improvements could add key functionality and ease administration. Below, I look at three of these shortcomings in Windows Server 2012 Hyper-V and offer some practical fixes.
1. Poor Quick Migration between hosts with different processor architectures
It’s possible to migrate virtual machines (VMs) between Windows Server 2012 Hyper-V hosts with different processor types, as long as the processors come from the same vendor (e.g., Intel to Intel or AMD to AMD). But it’s not possible to move a VM between processor vendors without first shutting it down and moving it offline to the alternate host. The problem is that, if the VM has a large virtual hard disk, the downtime can be lengthy while the disk is copied to the new Windows Server 2012 Hyper-V host.
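For comparison, here is roughly what that offline move looks like today with the Hyper-V PowerShell cmdlets in Windows Server 2012. This is only a sketch: the VM name, destination host and storage path are placeholders, and it assumes both hosts are already configured for VM migration.

```powershell
# Offline move of a VM between hosts with different processor vendors.
# 'SQL01', 'HV-AMD01' and 'D:\VMs' are placeholder names for illustration.
$vm   = 'SQL01'
$dest = 'HV-AMD01'

# Shut down the guest cleanly; the VM cannot stay online for a cross-vendor move.
Stop-VM -Name $vm

# Shared-nothing move of the VM and its storage to the destination host.
# (Moving a *running* VM this way only works between hosts from the same CPU vendor.)
Move-VM -Name $vm -DestinationHost $dest -IncludeStorage -DestinationStoragePath 'D:\VMs'

# Bring the VM back up on the new host once the copy completes.
Start-VM -Name $vm -ComputerName $dest

# Within the same vendor, enabling processor compatibility is what allows
# Live/Quick Migration across different CPU generations:
# Set-VMProcessor -VMName $vm -CompatibilityForMigrationEnabled $true
```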
How should it work?
For Windows workloads, I understand there needs to be some recognition of the new processor architecture, so a reboot or two may be necessary. But the method below would potentially cut the downtime from hours to minutes.
- I foresee Microsoft using the same method as Live Migration between Windows Server 2012 Hyper-V hosts, or a process similar to Quick Storage Migration in Windows Server 2008 R2 SP1, which takes a Hyper-V VSS Writer snapshot of the VM and moves the data to the alternate host while the source VM is still running (a rough sketch follows this list).
- Once the data is moved, the new VM is started offline, allowing it to recognize the new processor architecture, and is then shut down.
- Next, the source VM shuts down and a final Hyper-V VSS Writer snapshot is taken. Any remaining changes are transferred, and the VM on the destination host is started and brought online.
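To make the idea concrete, here is a rough outline of how such a migration might be scripted. It is speculative: the snapshot and delta-copy steps (marked TODO) have no equivalent cmdlets today and are exactly the part Microsoft would need to build; the VM and host names are placeholders.

```powershell
# Rough outline of the proposed cross-vendor migration flow.
$vm   = 'SQL01'      # placeholder source VM
$dest = 'HV-AMD01'   # placeholder destination host (different CPU vendor)

# TODO (hypothetical): take a Hyper-V VSS Writer snapshot of $vm while it runs
#                      and bulk-copy the snapshot data to $dest.

# Boot the copy once, offline, so the guest adjusts to the new processor, then power it off.
Start-VM -Name $vm -ComputerName $dest
Stop-VM  -Name $vm -ComputerName $dest

# Stop the source, take a final VSS snapshot, and ship only the remaining changes.
Stop-VM -Name $vm
# TODO (hypothetical): copy the delta since the first snapshot to $dest.

# Bring the migrated VM online on the destination host.
Start-VM -Name $vm -ComputerName $dest
```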
The process is very similar to a physical-to-virtual migration, where you keep the physical Intel or AMD host online while migrating it to a virtual host with a different processor.
Benefits of this method
This process would be good for migrating large VMs between hosts, because large VHD/VHDX files can take a long time to move. It would also benefit low-bandwidth environments, because the source VM would stay online during the migration.
2. Inadequate Live/Quick Migration of VMs between different Hyper-V versions
Hyper-V’s rapidly evolving feature set has kept pace with the resource requirements of modern applications. But this fast pace also creates a constant need to migrate hosts and VMs to the latest Hyper-V version.
In most cases, this process requires a shutdown of the VMs. Then, you must migrate them offline to the updated host and install the new Integration Components.
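Today that usually means an export/import cycle along these lines. This is a minimal sketch with placeholder VM names and paths; it assumes the source host has the Hyper-V PowerShell module (on a Windows Server 2008 R2 host, the export step would be done from Hyper-V Manager instead).

```powershell
# On the old Hyper-V host: stop the VM and export it to a share both hosts can reach.
Stop-VM   -Name 'APP01'
Export-VM -Name 'APP01' -Path '\\fileserver\VMExports'

# On the new Windows Server 2012 Hyper-V host: import the exported configuration.
# -Copy places the files on the new host's storage; -GenerateNewId avoids ID collisions.
$config = Get-ChildItem '\\fileserver\VMExports\APP01\Virtual Machines' -Filter *.xml |
          Select-Object -First 1
Import-VM -Path $config.FullName -Copy -GenerateNewId

# Start the VM, then upgrade the Integration Components from the VM console
# (Action > Insert Integration Services Setup Disk), which requires a reboot in the guest.
Start-VM -Name 'APP01'
```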
How should it work?
As in the scenario above, migrating VMs between Hyper-V versions probably requires a reboot to install the new Integration Components. But extended downtime should not be part of the process. Below is how I see it working in the future.
- Hyper-V performs a Hyper-V VSS Writer-assisted snapshot and moves the data from the source VM, while it is still running, until all data is moved to the new host;
- It then automatically starts the new destination VM offline (with the virtual NIC disconnected) and installs the Integration Components (see the sketch after this list); then
- Shuts down the source VM once the first pass of data is moved to the destination host;
- Moves the remaining changes from the source VM to the destination VM, so both virtual machines are synchronized; and finally
- Restarts the destination VM.
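The "start the destination VM with its network disconnected" step is already easy to express with today's cmdlets, as this small sketch shows (the VM and virtual switch names are placeholders):

```powershell
# Start a VM in an isolated state so Integration Components can be installed
# without the guest appearing on the network alongside the still-running source VM.
$vm = 'APP01'   # placeholder name

Get-VMNetworkAdapter -VMName $vm | Disconnect-VMNetworkAdapter   # detach all virtual NICs
Start-VM -Name $vm                                               # boot the guest offline

# ... install/upgrade Integration Components, reboot the guest, then power it off ...
Stop-VM -Name $vm

# Reconnect the NIC(s) when the destination VM is ready to take over ('Production'
# is a placeholder switch name):
# Get-VMNetworkAdapter -VMName $vm | Connect-VMNetworkAdapter -SwitchName 'Production'
```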
Benefits of this method
The source VM remains online until all data is moved to the new host, which reduces downtime and the disruption it causes.
3. Imperfect hot-add memory allocation with a running VM
It should come as no surprise that, over the years, server workloads have consumed more resources, including disk space, CPU and RAM. Microsoft added hot-add disk capability to Hyper-V in Windows Server 2008 R2, and hot-add memory is now supported in Windows Server 2012 Hyper-V, as long as the VM uses Dynamic Memory. But it is still not possible to hot-add memory if the VM uses a statically assigned (fixed) memory allocation.
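The asymmetry is easy to see from PowerShell. In the sketch below, with placeholder VM names, the first call is expected to succeed against a running Windows Server 2012 VM that already uses Dynamic Memory, while the second is rejected because memory cannot be changed on a running VM with a static allocation.

```powershell
# Dynamic Memory VM: the maximum can be raised while the VM is running.
Set-VMMemory -VMName 'WEB01' -MaximumBytes 16GB

# Static-memory VM: the equivalent change is refused while the VM is running,
# so growing the allocation means shutting the workload down first.
Set-VMMemory -VMName 'SQL01' -StartupBytes 32GB   # fails on a running VM with fixed memory
```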
By enabling hot-add memory only for VMs that use Dynamic Memory, Microsoft misses the workloads most likely to need an uninterrupted memory increase, such as databases or Java-based applications. The latter are neither recommended for nor supported with Dynamic Memory, because they consume their allocated memory fully and cannot request more from the dynamic range set for the VM.
Memory-hungry workloads are usually configured with a fixed allocation, yet they are exactly the ones that could use more memory on the fly, without interruption. For those reasons, I still think the hot-add memory feature is incomplete and only partly reduces the administrative burden for IT pros.
How should it work?
Using the capabilities already in Windows Server, Microsoft should make it possible to increase the memory of a running VM that uses a fixed allocation, just as the maximum can already be raised for a VM that uses Dynamic Memory.
The results of this fix
This capability would eliminate another manual process that incurs downtime. It would also allow for more automated remediation processes (with the use of System Center Operations Manager), which would accommodate the periodic need for memory-hungry workloads to receive additional resources. Finally, it would meet the needs of workloads with unpredictable memory requirements, such as databases, that are the most sensitive to downtime.
Don’t get me wrong: Windows Server 2012 Hyper-V includes some excellent additions. But, with every release, there are bound to be a few features that miss the cut. The options above would dramatically reduce migration and administrative time, but they are unfortunately not available…yet.