Citrix Provisioning Services (PVS) 7.6 vs. 7.7, VHD vs. VHDX and Scale-Out File Server Update


A couple of years ago I was troubleshooting a vDisk Versioning and merge VHD issue with Provisioning Services (PVS) 7.1 and Windows Scale-Out File Server (SOFS), which resulted in my recommendation to create a Maintenance store for vDisk Versioning. I'm happy to see that in the last two years this has improved dramatically and, dare I say, is fully resolved with the recent release of PVS 7.7. In this blog post, I'm going to share some testing data from my experience with PVS 7.7, along with a couple of recommendations. Let's get started!

A couple of notes about PVS 7.7 before we begin. If you take a look at What's New in PVS 7.7, you'll see a couple of highlights that I'll cover in this blog post:

  • In-place upgrade of target device software. Rather than reverse-imaging, you can install a new version of the target device software without having to manually uninstall the previous version. To do an in-place upgrade from version 7.6 to version 7.7 you must first install version 7.6.1.
  • Streaming VHDX formatted disks. This feature adds flexibility and efficiency to image and merging operations by letting you stream VHDX files as well as VHD files. Provisioning Services recognizes and uses the file format .vhdx as the extension for base disks and .avhdx for differencing disks (also known as versions). No configuration of this feature is necessary. You perform all image manipulation functions, such as deleting or merging vDisks, or creating new versions, in the Provisioning Services console the same way for both formats.

The in-place upgrade feature is a big deal, and I wanted to call that out before moving on. I won't be showing any test data for that feature, but my experience with it so far has been fantastic!

For performance testing and results, I'm going to compare PVS 7.6 vs. 7.7 and VHD vs. VHDX during image capture and Version merge operations. The other capability that deserves a specific call-out from What's New in PVS 7.7 is the Target Image Creation Wizard and its new option to create an image file. This feature allows you to capture directly to a local disk or file share and import the result straight into PVS. This is actually pretty huge from an admin experience perspective, so I'm not sure why the PVS team didn't feel it should be called out.

The following is an updated visual of my testing environment. If you compare it to my previous blog post, you'll see I've removed the Maintenance Store requirement, as it's no longer needed with PVS 7.7 when used with Scale-Out File Server. You'll also see a couple of shadowed SOFS and PVS servers illustrating the linear scale-out potential of these two components in the architecture:

A couple details about my testing environment:

  • Flash volume presented as iSCSI CSV to my SOFS Failover Cluster (high performance)
  • Windows Server 2012 R2 for all server components (Two SOFS VMs, two PVS 7.7 VMs)
  • 2 vCPU / 4GB RAM for all server components (including PVS to remove possibility of heavy NTFS caching)
  • Windows 7 64-bit vanilla gold image (non-optimized) Master VM for capturing. Windows installed, but no applications or updates. Roughly 20GB thin provisioned size.
  • 4 vCPU / 8GB RAM for all targets for capturing vDisks
  • Continuous Availability disabled for the SOFS Share
  • PVS 7.7 GA binaries on servers/targets for any 7.7 tests
  • PVS 7.6 GA binaries on servers/targets for any 7.6 tests (no hotfixes or updates)
  • All virtual machines are on VMware vSphere 5.5 with VMware Tools installed (using VMxNet3 network adapters)

First, I’ll start by running a simple file copy operation from the PVS server to the Scale-Out File Services share to establish a baseline for performance:

You can see from the baseline that I'm getting 255 MB/s of throughput to the SMB share (2+ Gb/s). This is really good performance, so suffice it to say there is no storage bottleneck for the SMB share.
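If you'd rather script the baseline than eyeball a file copy, a rough sketch like the one below (Python, with placeholder paths for the test file and the SOFS share) times a copy and converts the result to MB/s and Gb/s:

```python
import shutil
import time
from pathlib import Path

# Placeholder paths -- substitute your own large test file and SOFS share UNC path.
SOURCE = Path(r"C:\Temp\testfile.bin")
DEST = Path(r"\\sofs-cluster\pvsstore\testfile.bin")

def measure_copy_throughput(src: Path, dst: Path) -> float:
    """Copy src to dst and return the observed throughput in MB/s."""
    size_mb = src.stat().st_size / (1024 * 1024)
    start = time.perf_counter()
    shutil.copyfile(src, dst)
    elapsed = time.perf_counter() - start
    return size_mb / elapsed

if __name__ == "__main__":
    throughput = measure_copy_throughput(SOURCE, DEST)
    # 255 MB/s works out to roughly 2 Gb/s (255 * 8 / 1000 ≈ 2.04 Gb/s).
    print(f"Throughput: {throughput:.0f} MB/s ({throughput * 8 / 1000:.2f} Gb/s)")
```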

Also, if using SOFS for PVS, I’d highly recommend unchecking the Continuous Availability option on the share. I’ve run into issues with vDisk corruption when Continuous Availability was enabled:
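If you'd rather script that change than click through Failover Cluster Manager, the in-box SmbShare module's Set-SmbShare cmdlet can flip the property. Below is a minimal sketch that drives it from Python; the share name is a placeholder, and for a clustered share you may also need to pass -ScopeName:

```python
import subprocess

SHARE_NAME = "pvsstore"  # placeholder -- the name of your SOFS share

# Set-SmbShare ships in the in-box SmbShare module on Windows Server 2012 R2.
# Run this on a cluster node; for a clustered share you may also need to add
# -ScopeName <name of the SOFS role> to target the right share.
subprocess.run(
    [
        "powershell.exe",
        "-NoProfile",
        "-Command",
        f"Set-SmbShare -Name '{SHARE_NAME}' -ContinuouslyAvailable $false -Force",
    ],
    check=True,
)
```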

For the first test, I want to show a quick comparison of PVS 7.6 VHD (17m10s) vs. PVS 7.7 VHD (19m10s) capture times using the VHD format and an identical process (reboot and Create a vDisk through the PVS server/client network interface):

As you can see from the visual above, PVS 7.6 was slightly faster than 7.7 in the capture-to-VHD test (two minutes faster).

For the PVS 7.7 target, I selected the following options to capture a VHD. I chose Create a vDisk, which connects to the PVS server and streams the C:\ drive through the PVS server to create the vDisk (the legacy scenario). You'll also notice the new option, Create an image file; we'll come back to that.

For PVS 7.7 I selected Dynamic vDisk, VHD, and a 2MB block size (the same block size used for PVS 7.6):

Looking at some of the vSphere VM performance data, the PVS 7.6 Target was typically hovering at 20-25MB/s with peaks up to 31MB/s:

Looking at some of the vSphere VM performance data, the PVS 7.7 Target was typically hovering at 15-18MB/s with peaks up to 22MB/s (with very interesting peaks and plateaus):

Based on this network performance data, it makes sense that PVS 7.6 was slightly faster for capture. I didn't re-run this test to see if the results held across multiple runs; this is just a single data point.

Next, we’ll take a look at the PVS 7.6 VHD (17m10s) vs. PVS 7.7 VHD (19m10s) vs. PVS 7.7 VHDX (11m37s) capture times. This test also used the Create a vDisk process through the PVS server/client network interface:

Below is a screenshot showing the VHDX option in the capture wizard (note there’s no longer a configurable option for block size, uses 32MB by default):

Looking at some of the vSphere VM performance data, the PVS 7.7 VHDX Target was typically hovering at 30-35MB/s. The visual below shows PVS 7.7 VHD and VHDX capture network data side-by-side.

Next, I want to share results using the new feature in PVS 7.7 that enables direct capture of the system drive to a vDisk. In my example, I’m going to capture directly to the SMB share that’s hosted on my Scale-Out File Server. In the PVS Target Wizard, I’ll select Create an image file. For the data points below, I’ll call this ‘VHDX Direct’:

For the path, I’ll point the destination at my SOFS SMB share and use the VHDX format:

Comparing the capture times, PVS 7.7 VHDX capturing directly to a SOFS SMB share is roughly 8-9X faster, bringing the capture time down to 2 minutes and 6 seconds. Wow, this is great:

Looking at some of the vSphere VM performance data, the PVS 7.7 VHDX Target peaked at almost 200 MB/s during the capture (10X the throughput compared to a VHD capture through the PVS server). The visual below shows PVS 7.7 VHD, VHDX, and VHDX Direct to SMB share network data side-by-side.

Once it's captured, we just need to import the VHDX into the PVS console, saving over 15 minutes in the capture experience:
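To put those capture times in perspective, here's a quick sketch that converts the four results from this post into seconds and shows how much longer each approach takes relative to the direct-to-SMB capture:

```python
# Quick arithmetic on the capture times observed above.
capture_times = {
    "PVS 7.6 VHD (stream via PVS server)":     "17m10s",
    "PVS 7.7 VHD (stream via PVS server)":     "19m10s",
    "PVS 7.7 VHDX (stream via PVS server)":    "11m37s",
    "PVS 7.7 VHDX Direct (to SOFS SMB share)": "2m06s",
}

def to_seconds(t: str) -> int:
    minutes, seconds = t.rstrip("s").split("m")
    return int(minutes) * 60 + int(seconds)

baseline = to_seconds(capture_times["PVS 7.7 VHDX Direct (to SOFS SMB share)"])
for name, t in capture_times.items():
    secs = to_seconds(t)
    print(f"{name}: {secs}s ({secs / baseline:.1f}x the direct-capture time)")
```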

For the next set of tests, I’ll show a couple comparisons of VHD Merge operations compared to VHDX, using the SOFS SMB share.

For this test, I took my vanilla vDisk for PVS 7.6 (VHD only) and PVS 7.7 (VHD and VHDX), created a new Version, and installed Microsoft Office 2013. The gains would probably be even more dramatic when merging multiple versions into the base, but a single diff (.avhd vs. .avhdx) is an easy-to-reproduce test scenario.

For PVS 7.6, the .avhd was approximately 3.7GB in size:

For PVS 7.7, the .avhd and .avhdx were approximately 3.9GB in size:

I’ll use the same process for both PVS 7.6 and PVS 7.7 to merge the versions (Merged Base: Last base + all updates from that base):

The following shows the results of merging the versions into the base disks for PVS 7.6 VHD (10m0s), PVS 7.7 VHD (8m40s), and PVS 7.7 VHDX (4m20s):

As you can see from this visual, merging a 20GB base with a ~4GB delta takes less than half the time with PVS 7.7 VHDX compared to PVS 7.6 VHD. Also, it's safe to say that with this kind of network/disk performance, running PVS over a Scale-Out File Server SMB share is absolutely acceptable for a production environment. While my previous recommendation was to use a Maintenance Store when using PVS with SOFS, that's definitely no longer a requirement.

One additional factor I evaluated for this test was the disk/storage impact during merge operations, comparing PVS 7.7 VHD to PVS 7.7 VHDX. VHDX performance was significantly better, largely because VHDX data blocks are aligned on 1MB boundaries, whereas dynamic VHD blocks are not boundary aligned.
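To make the alignment point a bit more concrete, here's a simplified back-of-the-napkin model (not a real VHD/VHDX parser; the starting header/BAT offset is an assumption) of where block payloads land for a 2MB-block dynamic VHD versus a 1MB-aligned VHDX, and how many of them fall on 4K boundaries:

```python
# Simplified model of why VHDX hits the storage more efficiently during a
# merge: dynamic VHD prefixes every data block with a 512-byte sector bitmap,
# so the 2MB payload lands 512 bytes past a boundary, while VHDX allocates
# its data blocks on 1MB boundaries.

VHD_BLOCK = 2 * 1024 * 1024      # 2MB block size used for the VHD captures
VHD_BITMAP = 512                 # sector bitmap, padded to one 512-byte sector
VHDX_BLOCK = 32 * 1024 * 1024    # 32MB default block size for PVS 7.7 VHDX
VHDX_ALIGN = 1024 * 1024         # VHDX data blocks sit on 1MB boundaries

def vhd_payload_offsets(first_block_offset, count):
    """Offsets where dynamic-VHD block payloads begin (right after each bitmap)."""
    return [first_block_offset + i * (VHD_BITMAP + VHD_BLOCK) + VHD_BITMAP
            for i in range(count)]

def count_aligned(offsets, boundary=4096):
    return sum(1 for o in offsets if o % boundary == 0)

if __name__ == "__main__":
    # Assumption: the first VHD block starts at 0x3000, after the footer copy,
    # dynamic header, and BAT. Real images vary, but the 512-byte skew remains.
    vhd = vhd_payload_offsets(first_block_offset=0x3000, count=100)
    vhdx = [4 * VHDX_ALIGN + i * VHDX_BLOCK for i in range(100)]
    print(f"VHD block payloads 4K-aligned:  {count_aligned(vhd)}/100")
    print(f"VHDX block payloads 4K-aligned: {count_aligned(vhdx)}/100")
```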

Watching Perfmon on the SOFS nodes, the following disk performance results were observed.

First, I’ll show you the Average Disk MB/s throughput (combined reads and writes) during a merge operation:

  • 7.7 VHD Disk Ave. MB/s: 50
  • 7.7 VHDX Disk Ave. MB/s: 193

Next, I'll show you the breakdown, which shows more than double the throughput for both Reads and Writes when comparing VHD to VHDX:

  • 7.7 VHD Disk Ave. Read MB/s: 38
  • 7.7 VHDX Disk Ave. Read MB/s: 97
  • 7.7 VHD Disk Ave. Write MB/s: 38
  • 7.7 VHDX Disk Ave. Write MB/s: 96

Looking at IOPS, we also see good improvements. Here’s the overall IOPS (Disk Transfers/sec) during a merge:

  • 7.7 VHD Disk Transfers/sec: 218
  • 7.7 VHDX Disk Transfers/sec: 373

Here’s the breakdown, showing transfer performance for both Reads and Writes for VHD vs. VHDX:

  • 7.7 VHD Reads/sec: 124
  • 7.7 VHDX Reads/sec: 358
  • 7.7 VHD Writes/sec: 94
  • 7.7 VHDX Writes/sec: 15
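For what it's worth, you can derive a couple of extra data points from the Perfmon numbers above. The quick sketch below computes the VHDX vs. VHD ratios and the implied average transfer size during the merge:

```python
# Derived metrics from the merge-time Perfmon numbers above: the ratios,
# plus the implied average transfer size (throughput / IOPS).
merge = {
    "7.7 VHD":  {"mb_per_sec": 50,  "transfers_per_sec": 218},
    "7.7 VHDX": {"mb_per_sec": 193, "transfers_per_sec": 373},
}

for fmt, m in merge.items():
    avg_kb = m["mb_per_sec"] * 1024 / m["transfers_per_sec"]
    print(f"{fmt}: {m['mb_per_sec']} MB/s at {m['transfers_per_sec']} IOPS "
          f"≈ {avg_kb:.0f} KB per transfer")

ratio_mb = merge["7.7 VHDX"]["mb_per_sec"] / merge["7.7 VHD"]["mb_per_sec"]
ratio_io = merge["7.7 VHDX"]["transfers_per_sec"] / merge["7.7 VHD"]["transfers_per_sec"]
print(f"VHDX vs. VHD: {ratio_mb:.1f}x throughput, {ratio_io:.1f}x IOPS")
```

The much larger implied transfer size for VHDX appears consistent with the 1MB-aligned block layout discussed above.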

The testing results in this blog post should have you convinced that the default VHDX format for PVS 7.7 vDisks is the way to go moving forward! If you have a bunch of legacy VHD-formatted PVS vDisks, it may be worth converting them to VHDX to benefit from these dramatic performance improvements.
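I haven't validated this end-to-end, but if you want to experiment with converting at the file level, the Hyper-V PowerShell module's Convert-VHD cmdlet will convert a VHD to VHDX (you'd still need to import the converted file as a new vDisk in the PVS console, and the comments below discuss reverse-imaging and cvhdmount as alternatives). Here's a minimal sketch driving it from Python, with placeholder paths:

```python
import subprocess

# Placeholder paths -- point these at your legacy vDisk and desired output.
SOURCE_VHD = r"D:\Store\GoldImage.vhd"
DEST_VHDX = r"D:\Store\GoldImage.vhdx"

# Convert-VHD ships with the Hyper-V PowerShell module; run this from a
# machine with that module installed. The original VHD is left in place.
subprocess.run(
    [
        "powershell.exe",
        "-NoProfile",
        "-Command",
        f"Convert-VHD -Path '{SOURCE_VHD}' -DestinationPath '{DEST_VHDX}' -VHDType Dynamic",
    ],
    check=True,
)
```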

Congratulations to the PVS team on delivering a fantastic release with phenomenal performance improvements for basic admin operations. I'm really happy with all the results I'm seeing comparing PVS 7.6 to PVS 7.7, and I hope to see broad adoption of VHDX. Additionally, the ability to capture directly to a local disk or an SOFS SMB share is definitely something I'll be leveraging in my deployments moving forward.

As always, if you have any questions, comments, or just want to leave feedback, please do so below. Thanks for reading!

@youngtech

  1. Wotan 01-17-2016

    Hello,

    Would I somehow be able to convert my current Golden-Images (vhd, Dynamic) to VHDX?

    Thank you!

    • Dane Young 01-17-2016

      I haven’t tried this, but I suspect it would likely involve reverse imaging the vDisk back to a local hard disk (using %ProgramFiles%\Citrix\Provisioning Services\BNImage.exe).
      Let me know if you need any pointers on this process.
      Thanks!
      -@youngtech

    • Trentent 01-18-2016

      If you use cvhdmount on the VHDX and VHD images, you can use any cloning program you like to do a disk-to-disk copy. I like http://hddguru.com/software/HDD-Raw-Copy-Tool/ as it's pretty fast that way.

      • Dane Young 01-26-2016

        Great feedback! Thanks Trentent.
        –@youngtech

  2. Joeri Kumbruck 01-12-2016

    Great article Dane! Thanks.

  3. Trentent 01-11-2016

    In PVS 7.1/7.6 you can make VHD files with 16MB block sizes that are supposed to be 4K aligned. What would be the performance difference of these VHDs vs. the VHDXs?

  4. Nick Rintalan 01-11-2016

    Great stuff, Dane. Glad you’ve found some of the “hidden” features of 7.7. I have one more to show you that I think you’ll like, too. Should have an article published shortly on it.

    -Nick

  5. G. Jongeneel 01-11-2016

    Very nice and informative article! Thanks.

