Provisioning Services High Availability with a Local vDisk Store

When setting up high availability in a Provisioning Services environment, there is always the question of where to put the vDisk store. You have several options for its location: the local system, a Windows CIFS share, Network Attached Storage, an iSCSI SAN, or a Fibre Channel SAN. Each option has its pros and cons. For more information on the tradeoffs between the different vDisk store options, see Citrix support article CTX119286 – Provisioning Server High Availability Considerations. In this post we are going to focus on high availability with a local vDisk store.

The steps for configuring high availability with a local vDisk store are pretty straightforward. The main concern is the space needed on each Provisioning Server if you have multiple vDisk images. The upside is that drives are cheap these days. Using a local vDisk store for Provisioning Services high availability is also cheaper than purchasing Network Attached Storage or an iSCSI or Fibre Channel SAN. It also offers more redundancy than a Windows CIFS share: keeping a vDisk store on each Provisioning Server means the file server hosting a CIFS share is no longer a single point of failure. You could cluster the file server for the CIFS share, but then you are still spending extra money on some type of shared storage for the cluster.

Another concern is the number of disks you can install in your servers. If you have enough room to split the operating system and vDisk store onto different RAID sets, do so; having the RAID sets on different RAID controllers would be even better. I strongly recommend this. A good example would be a server with six disk drive bays: use two bays for your operating system in a mirrored (RAID 1) set, then use the other four bays for your vDisk store in a RAID 5 set with a spare, or a RAID 10 set. The more spindles, the better the performance.

The only downside to using high availability with a local vDisk store is that you have to manually copy the vDisk files to all of your Provisioning Servers in the farm. This can be automated with scripting or another process.
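As one way to script that copy step, here is a minimal Python sketch. It assumes the remote stores are reachable as paths from the server holding the master copy (the UNC server names and `D$\vDisks` shares in the usage comment are hypothetical examples, not anything from a real farm):

```python
import shutil
from pathlib import Path

def replicate_vdisk(vdisk_name, local_store, remote_stores):
    """Copy a vDisk's .vhd and .pvp files from the local store
    to each remote Provisioning Server's vDisk store path."""
    copied = []
    for ext in (".vhd", ".pvp"):
        src = Path(local_store) / (vdisk_name + ext)
        if not src.exists():
            raise FileNotFoundError(src)
        for store in remote_stores:
            dst = Path(store) / src.name
            shutil.copy2(src, dst)  # copy2 preserves file timestamps
            copied.append(dst)
    return copied

# Hypothetical usage with administrative shares on the other servers:
# replicate_vdisk("XenAppImage",
#                 r"D:\vDisks",
#                 [r"\\PVS02\D$\vDisks", r"\\PVS03\D$\vDisks"])
```

This is only a sketch; in practice you would also want logging and error handling around the copy of multi-gigabyte images.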

So let’s go through the process of setting up Provisioning Services high availability with a local vDisk store. Follow the steps below to configure high availability with a local vDisk store:

1. Configure the vDisk store to be available on all Provisioning Servers in the farm with the same path to the vDisk store – for example, D:\vDisks.

2. Set up a vDisk on one of the Provisioning Servers in the farm without load balancing or high availability enabled.  Make sure the vDisk is in Private Mode (single device, R/W access).

3. Create a Target Device and assign the vDisk to it.

4. Boot the Target Device to the vDisk and create your master image on the vDisk using XenConvert.

5. Shut down the Target Device and make a copy of the vDisk for backup/updates.  Keep the backup/updates copy on this server.  Do not copy the backup/updates copy to the other Provisioning Servers.  Also do not enable load balancing and high availability on the backup/updates copy.

6. Put the vDisk in Standard Mode (multi-device, write-cache enabled).

7. Copy the vDisk (both the .vhd and .pvp files) to the remaining Provisioning Servers in the farm, to the same path where you created the original vDisk in step 1 – for example, D:\vDisks.

8. Once the vDisk (.vhd and .pvp files) is copied over to the remaining Provisioning Servers in the farm, enable load balancing and high availability on the vDisk.

9. Create the remaining Target Devices and assign the vDisk to them.

10. Now you can boot Target Devices to the vDisk. 
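Before enabling load balancing and high availability on the vDisk in step 8, it can be worth confirming that every server's copy is byte-identical, since a partial or corrupted copy would only show up as boot failures later. A hedged sketch (the function names are my own, not part of any Provisioning Services tooling):

```python
import hashlib
from pathlib import Path

def vdisk_digest(store, vdisk_name, chunk=1024 * 1024):
    """SHA-256 of a store's copy of the vDisk's .vhd file,
    read in chunks so multi-gigabyte images don't exhaust memory."""
    h = hashlib.sha256()
    with open(Path(store) / (vdisk_name + ".vhd"), "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest()

def stores_match(vdisk_name, stores):
    """True only if every Provisioning Server's copy hashes identically."""
    digests = {vdisk_digest(store, vdisk_name) for store in stores}
    return len(digests) == 1
```

The same check also applies to the .pvp file if you want to be thorough, although the .vhd is the large file where copy errors are most likely.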

Note: When you need to update your image, use the backup/updates copy.  Make sure the backup/updates copy is in Private Mode (single device, R/W access) and assign it to one Target Device.  Make your changes/updates to the image, then start again at step 5 to update all of your Provisioning Servers with the highly available, load-balanced vDisk using a local vDisk store.  Another thing to be aware of is that copying large vDisk files between Provisioning Servers during the day can cause performance issues and put unnecessary load on your Provisioning Server network cards.  Copying vDisk updates between Provisioning Servers should be done during non-production hours.
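One simple way to enforce the non-production-hours rule is to have the copy script refuse to run outside a maintenance window. A sketch, where the 22:00–05:00 default window is purely an assumed example:

```python
from datetime import datetime, time

def in_maintenance_window(now, start=time(22, 0), end=time(5, 0)):
    """True if 'now' (a datetime.time) falls inside the window.
    Handles windows that wrap past midnight, e.g. 22:00-05:00."""
    if start <= end:
        return start <= now <= end
    return now >= start or now <= end

# Hypothetical guard before kicking off the vDisk copy:
# if in_maintenance_window(datetime.now().time()):
#     replicate_vdisk(...)
# else:
#     print("Outside maintenance window; skipping vDisk copy.")
```

Paired with a scheduled task that runs every night, this keeps large vDisk copies off the wire during production hours even if someone triggers the script manually.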

You should now see the Target Devices get load balanced across the Provisioning Servers in the farm.  The Target Devices will also fail over between Provisioning Servers if one goes down, since high availability is enabled.  When the downed Provisioning Server comes back up, you can right-click the Provisioning Servers and click Rebalance Devices to distribute Target Devices evenly across your Provisioning Services farm.

If you have found this article interesting or if you have any other insights, please feel free to leave comments on this article.

  1. Rich Nickles, 11-05-2009

    This is the exact scenario we built in August 2009. Two Provisioning Servers hosting identical vDisk images in HA for 56 virtual XenApp servers on 14 XenServers. All local disk. We use a script to copy the vDisk image to the other PVS server as well as to a network repository for backup to tape. When we designed the environment, it seemed confusing (and risky) since we could find zero documentation on what appeared to be the simplest configuration. However, we’ve been very impressed with the result and the simplicity of the architecture.

  2. Jarian Gibson, 09-11-2009

    John,

    Great point! I am actually working on another article about Provisioning Servers HA with a read-only vDisk store. Should be posted soon. Thanks!

  3. John Chaisson, 09-11-2009

    A big thing you never really find when looking through documentation is that when you use a SAN and want HA with Provisioning Servers, you should invest in a clustered file system. It prevents multiple hosts from reading or writing a vDisk at the same time, which otherwise causes lock errors and vDisk corruption.

  4. Nice Walk-through. I use this method in our environment with 4 PVS Servers, and have learned it’s a bad idea to copy large vDisks over the network between PVS Servers during the day – it puts unnecessary load on the NICs.

    • Jarian Gibson, 09-04-2009

      You are absolutely right. I missed that in my notes. Article updated. Thanks Roger!
