Provisioning Services Shared Volumes Using Read-Only Managed Stores


When discussing options for vDisk storage placement with Provisioning Services, there are two primary choices: block-based attached storage (locally attached, iSCSI, FC, FCoE) and network-attached storage (CIFS or NFS). One of the best resources for understanding the advantages of block-based attached storage and NTFS file caching is the Advanced Memory and Storage Considerations for Provisioning Services document. After understanding the concepts in that document, any Citrix engineer would choose block-based over network storage any day of the week! If you've made the same mistake I made, you ended up with local storage on each Provisioning Server in the site and manual or DFS replication processes to ship updated versions of the vDisks between servers. This blog post is designed to show that it does not have to be that way in virtual or physical PVS environments where SAN storage is within reach!

In a Windows Server cluster, write access to the shared volume is controlled by the witness disk or share. With Provisioning Services, Citrix provides a similar mechanism for controlling write access through a feature called a Read-Only Managed Store, combined with Maintenance/Active mode. The example configuration below shows how to set up shared volumes for virtual Provisioning Servers running on vSphere 4.x.

Before we dig into the configuration, let’s talk about some design considerations:

– Shared volume presented from SAN storage

– Each vDisk (VHD and PVP) has a dedicated shared volume

– Each shared volume has a dedicated store marked as a Read Only Managed Store

– Shared VMDKs sized slightly larger than the anticipated VHD size for overhead and PVP file

– Provide a path to a read/write location for the write cache for the server failback option

Example configuration diagram:

 
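For reference, a layout that follows the considerations above might look roughly like this (server names, disk sizes, store names, and drive letters here are hypothetical):

– PVS01 and PVS02: virtual Provisioning Servers with the same shared VMDKs attached to both
– SCSI 1:0: SharedDisk1.vmdk (~42 GB), Store1 mounted as E:\ on both servers, holding ProdDisk1.vhd and ProdDisk1.pvp (~40 GB)
– SCSI 1:1: SharedDisk2.vmdk (~32 GB), Store2 mounted as F:\ on both servers, holding ProdDisk2.vhd and ProdDisk2.pvp (~30 GB)
– SCSI 1:2: SharedDisk3.vmdk (~42 GB), Staging store mounted as G:\ on both servers, used for a working copy of whichever vDisk is being updated
– D:\WriteCache: a local, read/write folder on each server for the write cache (server failback option)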

To start, I'll assume you've already built out two Provisioning Servers running PVS 5.6 SP1 on Windows Server 2008 R2. You can create the shared VMDKs through the vSphere client or over SSH. In the vSphere client, when adding a new disk, select the destination datastore and choose the "Support clustering features such as Fault Tolerance" option:

Select SCSI (1:0) to add the new disk to the first channel on a new SCSI controller:

Once the new disk is added, you will see the new SCSI controller.  Change the Controller type to Paravirtual (optional) and the SCSI Bus Sharing to Physical to support clustered storage access from multiple ESX hosts:
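For reference, those two selections end up in the virtual machine's .vmx file as entries along these lines, assuming the new controller is scsi1 to match SCSI (1:0) above; "lsilogic" would appear instead of "pvscsi" if you kept the default controller type:

scsi1.present = "TRUE"
scsi1.virtualDev = "pvscsi"
scsi1.sharedBus = "physical"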

Click OK to save the virtual machine configuration and the new virtual disk will be created. This process may take upwards of 5-10 minutes depending on storage performance and the size of the disk, as ESX zeroes out the entire contents of the new (eager zeroed thick) disk.

One disadvantage of creating the disk through the GUI is the lack of control over placement and the VMDK file name. For example, in the screenshots above, ESX will create the new VMDK in the following location: datastore002/VMPVS01/VMPVS01_1.vmdk. That default works great for standard virtual machines, but here we want a little more control. Instead, I prefer to use SSH to create the VMDK manually with the appropriate location and file name, then attach it as an existing disk. Use the following syntax: vmkfstools -c xxG -d eagerzeroedthick -a lsilogic Filename.vmdk
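For example, to create a 42 GB eager zeroed thick disk in its own folder on the datastore (the folder and file names here are just placeholders), run something like this from the ESX console or over SSH:

mkdir /vmfs/volumes/datastore002/PVSShared
vmkfstools -c 42G -d eagerzeroedthick -a lsilogic /vmfs/volumes/datastore002/PVSShared/SharedDisk1.vmdk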

Once the VMDKs are all created, add them to each virtual machine using the "Existing" hard disk option and selecting SCSI channels on the paravirtual/physical bus-sharing adapter (SCSI 1:0, 1:1, 1:2, etc.). Online, initialize, format, and mount each of the shared volumes on the first provisioning server:
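If you prefer to script the online/initialize/format/mount steps on the first provisioning server, diskpart can handle it. The disk number, volume label, and drive letter below are assumptions; confirm the right disk against the "list disk" output first:

diskpart
rem Identify the new shared disk, bring it online, and make it writable (clearing readonly may not be needed, depending on the SAN policy)
list disk
select disk 1
online disk
attributes disk clear readonly
rem Initialize the brand-new disk as MBR, create and format a single NTFS partition, and assign a drive letter
convert mbr
create partition primary
format fs=ntfs quick label="Store1"
assign letter=E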

Next, create Provisioning Services stores for each of the shared drives, marking them as Managed Read-Only Stores:



Mark all of the servers in the site that have access to this shared store:

Provide the path to the root of the shared volume and a read/write path for the write cache:
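As an example (paths are hypothetical), the store path would simply be the root of the mounted shared volume, with the write cache pointed at a local read/write disk on each server:

Store path: E:\
Write cache path: D:\WriteCache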

Once all the stores have been set up, we need to put them into Maintenance mode so the NTFS file system does not become corrupted. Right-click the store and select Manage Store:

Click Next and verify that, by default, both provisioning servers have read/write access to the shared volumes. Leave Maintenance selected and click Next:

Click Execute and PVS01 will be set to Read/Write while PVS02 is set to Offline:

From this point forward, vDisk maintenance can be performed from PVS01 without the risk of volume corruption. Once your vDisk is put into Standard Mode with cache on target device local HD, right click the store to set it to active mode (Read access on both servers):

Once target devices are in production, you should not put the store into maintenance mode without bringing all target devices offline.  Therefore, the best method is to copy the PVP and VHD files from the production store to a “Staging” store.  You can easily put the staging store into maintenance mode without affecting production, set the vDisk to private mode and perform your typical vDisk Automatic Update procedures.  When you’re all done, set the vDisk back to standard mode and switch the store back to Active (Read-Only) mode. This process is precisely why I suggest and demonstrate breaking apart the LUN into multiple VMDKs that can be brought online and offline without affecting other vDisks on the shared storage.
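A simple way to seed the staging store is robocopy, run from whichever server currently has read/write access to the staging volume (drive letters and file names here are hypothetical; E:\ is the production store and G:\ the staging store):

robocopy E:\ G:\ ProdDisk1.vhd ProdDisk1.pvp /Z

Since the production store is read-only in Active mode at this point, copying from it shouldn't disturb target devices streaming that vDisk.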

By the way, this environment can easily be built out using VMware Workstation. To enable clustered volume access for VMware Workstation, create and add the VMDKs to both of your provisioning servers (Don’t use thin provisioned disks). Follow the same procedures, putting the shared volumes on a new SCSI controller. Then, add the following lines to your PVS .vmx files to enable clustered volume access:

disk.locking = "false"

diskLib.dataCacheMaxSize = "0"
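To honor the "don't use thin provisioned disks" note above, the shared VMDKs for Workstation can be created preallocated up front with vmware-vdiskmanager (size and file name here are hypothetical; -t 2 creates a single preallocated file):

vmware-vdiskmanager -c -s 42GB -a lsilogic -t 2 SharedDisk1.vmdk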

Feel free to comment below if you would like any further details on setting up managed stores for provisioning services shared volumes. Thanks so much,

–youngtech

  1. robol 02-16-2012

    Hi there,
    I am trying to achieve the same results in Citrix PVS 6.5. I can't see these options:

    Click next and you can verify that by default both provisioning servers have read/write access to the shared volumes. Leave Maintenance selected and click next:

    Have you had any experience with that? Appreciate your post though. Great work!

    • youngtech 02-28-2012

      I believe you mean PVS 6, since 6.5 has not been released yet. Unfortunately this is one of the features that the product team decided was not worth continuing in PVS 6 as it is replaced by DFS-R support. I know, it’s a bummer, but I don’t think we’ll be getting read-only managed stores back in any future releases.
      Thanks,
      –youngtech

  2. Mike Foster 11-30-2011

    I understand your instructions, but I am assuming that I add the same disk to the second provisioning server at some point in this process, right?

    • youngtech 12-02-2011

      Mike, that's correct. About halfway through the blog you will see that the VMDK needs to be mounted to both PVS VMs: "Once the VMDKs are all created, add them to each virtual machine using the 'Existing' hard disk option and selecting SCSI channels on the paravirtual/physical bus-sharing adapter (SCSI 1:0, 1:1, 1:2, etc.)."
