Installing and Configuring Atlantis USX 2.2 for VMware vSphere and NOW Citrix XenServer!

Introduction and Overview

Several months ago, I wrote a getting started guide for Installing and Configuring Atlantis USX 2.0 on VMware vSphere. Since that time, two additional releases have become available, and new to USX 2.2 is the ability to manage, present, and consume storage from Citrix XenServer! Read more about the USX on XenServer announcement here. At Atlantis Connect in early February, my friend and fellow Citrix Technology Professional Thomas Poppelgaard did a session with Mark Nijmeijer in which Atlantis USX 2.2 was installed and configured live on Citrix XenServer (in beta at the time). To learn more about Atlantis Software Defined Storage and USX, visit www.atlantiscomputing.com.

USX 2.2 was officially released to the web on Friday, February 20th. In this blog post, I’ll show you how to install and configure USX 2.2 for vSphere, and also show how USX can now be used to manage, present, and consume Software Defined Storage from both VMware vSphere and Citrix XenServer! Of course, all of the Atlantis USX HyperDup Data Services, including Deduplication, Compression, Caching, and Coalescing (and much more), will be present regardless of the hypervisor being utilized underneath. This is particularly useful for customers utilizing XenServer for NVIDIA GRID vGPU for Server Based Computing (XenApp) and Virtual Desktop Infrastructure (XenDesktop) workloads, as Atlantis USX is the first hyper-converged storage solution for Citrix XenServer! Let’s get started.

Below is a topology of the environment I will be using throughout this blog post.

[Topology diagram: USX 2.2 with a vSphere 5.5 cluster and a XenServer 6.5 resource pool]

As you can see from the topology, I have built six hosts: three for a vSphere 5.5 cluster and three for a XenServer 6.5 resource pool. Each host boots from a local (non-SSD) 120GB disk, representing a RAID1 system/boot volume or MicroSD. Each host also has four local SSDs (400GB each). In my configuration, these SSD drives will not be in a RAID group, as USX automatically takes care of data resiliency and replication across the servers and disks (similar to the VMware VSAN model). Other configurations and options exist, and you could certainly use a RAID controller and disk group, provided the form factor of your disks supports it (no RAID options with flash on DIMM or PCIe-based flash). I have created two networks: one to represent the Management and Storage network, and one to represent the Virtual Machine network. For a production environment, I’d recommend separating the Management and Storage networks to avoid saturation during peak usage.

As before, the first step is to import the USX 2.2 Manager VM(s). The process is quite simple, as USX is distributed in the common OVF format. In my environment, I imported this to my vSphere environment, but it could just as easily be imported and run from XenServer. After working through the steps in this blog post, I later imported the OVF to XenServer to validate that this works in environments that are 100% XenServer; the import works flawlessly. For details on importing to either vSphere or XenServer, see section 3.1 of the Administrator’s guide: Deploy the USX Manager (login required).
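
If you prefer the command line to the vSphere Client’s Deploy OVF Template wizard, VMware’s ovftool can push the OVF straight into vCenter. Here’s a minimal sketch; the OVF file name, VM name, datastore, network, and inventory path are placeholders for my lab layout, so substitute your own:

# Placeholder names throughout; adjust for your environment
ovftool --name=USXMGR01 --datastore=Local-Boot-01 --network="Management" USX-Manager-2.2.ovf vi://root@172.16.3.104/Datacenter/host/Cluster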

Next, I’ll show you what my vSphere and XenServer clusters look like before USX VMs have been deployed and configured.

VMware vSphere 5.5: Hosts and Clusters view of vCenter \ Datacenter \ Cluster:

[Screenshot 001]

VMware vSphere 5.5: Datastores and Datastore Clusters view:

[Screenshot 002]

Citrix XenServer 6.5: Infrastructure View

[Screenshot 003]

For my XenServer environment, I installed XenServer with only the local boot drive selected during install. As mentioned above, each host boots from a local (non-SSD) 120GB disk, representing a RAID1 system/boot volume or MicroSD, and each host has four local SSDs (400GB each). These SSD drives are not in a RAID group, as USX automatically takes care of data resiliency and replication across the servers and disks. To mount the local SSDs as individual Storage Repositories (similar to the SSD datastores in vSphere), I ran the following commands to add the four SSD disks as storage repositories:

# Initialize each SSD as an LVM physical volume, then create a local ext (file-based VHD) SR on it
pvcreate /dev/sdb
xe sr-create name-label="Local-SSD-1" shared=false device-config:device=/dev/sdb type=ext

pvcreate /dev/sdc
xe sr-create name-label="Local-SSD-2" shared=false device-config:device=/dev/sdc type=ext

pvcreate /dev/sdd
xe sr-create name-label="Local-SSD-3" shared=false device-config:device=/dev/sdd type=ext

pvcreate /dev/sde
xe sr-create name-label="Local-SSD-4" shared=false device-config:device=/dev/sde type=ext
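
As a quick sanity check, you can confirm from the XenServer console that the device names above actually map to the 400GB SSDs:

# List detected disks and their sizes (run as root in dom0)
fdisk -l | grep "^Disk /dev/sd"
cat /proc/partitions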

If you have more than four local SSDs, just keep incrementing sd(x) appropriately (i.e., sdb, sdc, sdd, sde, sdf, sdg, and so on). After the local Storage Repositories were created, I configured the networks and added the servers to a Resource Pool. Note that if you want to configure local storage after a XenServer host has already been added to a resource pool, you must add the host-uuid= parameter to the xe sr-create command, as shown in the sketch below.
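
Here’s a rough sketch of that pooled variant; the host name and UUID are placeholders:

# Find the UUID of the pool member that owns the local disk
xe host-list name-label=xenserver01 --minimal

# Create the local SR against that specific host
xe sr-create host-uuid=<uuid-from-above> name-label="Local-SSD-1" shared=false device-config:device=/dev/sdb type=ext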

Initial Configuration of Atlantis USX 2.2

Once the USX Manager has been imported, it’s time to configure it for the first time. To do this, fire up a browser and navigate to the IP address configured during the import process. Log in with the default username and password (admin/poweruser):

[Screenshot 004]

When you first log in, you’ll be guided through a number of first-time setup steps. Before getting started with these steps, I typically review the global preferences and make a couple of basic adjustments. To review the global preferences, select admin –> Preferences from the top right corner.

[Screenshot 005]

For this environment, I’ll perform the following:
– Select ‘Enable Advanced Settings’ (many more options are presented when configuring volumes)
– Deselect ‘Enable Hypervisor Reservations’ (I’m running in a lab with fairly constrained resources, so I’ll turn this off for now; not recommended for production environments, but very useful for engineering labs)
– Change ‘Max Storage Allocation’ to 80% (again, because I’m running in a lab; review your environment and consider how much storage you want allocated to USX)
– Change ‘Max Memory Allocation’ to 80% (again, because I’m running in a lab; review your environment and consider how much memory you want allocated to USX)
– Change ‘Max Flash Allocation’ to 80% (again, because I’m running in a lab; review your environment and consider how much flash you want allocated to USX)
– Click OK to save the preferences

Click Step 1: Add VM Manager:

[Screenshot 006]

You can see from the Virtualization Platform drop-down that there are now options for both VMware vSphere and Citrix XenServer. I’ll configure the vSphere environment first using the following:
– Name: Friendly name for the vCenter Server (VMVCSA5501)
– Address: IP Address for the vCenter Server (172.16.3.104)
– Username: Administrative username for vCenter Server (root)
– Password: Administrative password for vCenter Server (mypassword)
– Click OK to add the VM Manager

[Screenshot 007]

I’ll click the + button to add an additional VM Manager and repeat the above process for XenServer. This time, I’ll enter the information and credentials for my XenServer Resource Pool Master. After both have been entered, here’s what the VM Manager section will look like:

[Screenshot 008]

Click the back arrow to return to the list of steps.

Selecting Hypervisors and Storage from vSphere and XenServer

Click Step 2: Select Hypervisors and Storage:

[Screenshot 009]

For my vSphere environment, I’ll select the Cluster and deselect the Local Boot disks from each of my hosts, leaving the SSDs selected:

[Screenshot 010]

Save changes when done. You’ll notice that for VMware environments, the Disk Type is automatically selected appropriately for each disk. This is because USX pulls the vSphere datastore flag (SSD or non-SSD) from each of the hosts in the environment.
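
As an aside, if a local flash device ever shows up as non-SSD here (some RAID controllers mask the flag), you can check what ESXi has detected from an ESXi shell:

# Show each device and whether ESXi has flagged it as SSD
esxcli storage core device list | grep -E "Display Name|Is SSD"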

For my XenServer environment, I’ll change the drop-down to select my Resource Pool Master, select the Cluster, and deselect the Local Storage disks from each of my hosts, leaving the SSDs selected. This time, I’ll need to change the type property on each of the flash disks, as XenServer doesn’t have an SSD or non-SSD flag like vSphere does:

[Screenshot 011]

Save changes and return to the list of steps when done.
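
If you’re ever unsure which XenServer disks are actually flash when setting the type property, the kernel’s rotational flag is a quick check from the host console (0 means SSD, 1 means spinning disk):

# Check the rotational flag for each of the four SSDs added earlier
for d in sdb sdc sdd sde; do echo -n "$d: "; cat /sys/block/$d/queue/rotational; done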

For Steps 3 and 4, refer back to my previous guide as these steps are nearly identical in USX 2.2. Here’s a summary of Step 3:

[Screenshot 013]

Creating All-Flash Hyper-Converged Infrastructure and Volume VMs on vSphere

Finally, let’s start creating some volumes! For the first volume type, I’ll deploy a Hyper-Converged Infrastructure Volume (new in USX 2.2) to the vSphere cluster using the following:

[Screenshot 015]

On the Advanced Settings page, I’ll select the following:

– Prefer flash for storage capacity
– Prefer flash for performance
– Workload Ratio: Write Heavy

As expected, the USX Manager does the heavy lifting of deploying and configuring the Service VMs and the Infrastructure Volume VM:

[Screenshot 016]

After a couple of minutes (depending on performance in your environment), the USX Service VMs and Infrastructure Volume VM (and HA VM) will be powered up and checked in to the USX Manager, and we can proceed with the next step.

[Screenshot 017]

Next, I’ll create a Hyper-Converged 100GB Flash Volume on the VMware vSphere cluster using the following:

[Screenshot 018]

On the Advanced Settings page, I’ll select the following:

– Prefer flash for storage capacity
– Prefer flash for performance
– Workload Ratio: Write Heavy

Once created, I’ll click Manage Volumes to mount it to my VMware vSphere Cluster (Actions \ Mount Volume):

[Screenshot 019]

I’ll select my Cluster and click OK:

[Screenshot 020]

As expected, USX mounts the NFS Datastore to each of the hosts in the cluster:

[Screenshot 021]
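
If you’d like to verify the mount from the command line as well, each ESXi host in the cluster should now list the USX volume as an NFS mount:

# List NFS datastores mounted on this ESXi host
esxcli storage nfs list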

Creating All-Flash Hyper-Converged Infrastructure and Volume VMs on XenServer

Let’s go ahead and repeat the same process for XenServer. First, we’ll create the Hyper-Converged Infrastructure Volume, then we’ll create a 100GB Flash Volume. I’ll create the Infrastructure Volume using the following:

[Screenshot 023]

On the Advanced Settings page, I’ll select the following:

– Prefer flash for storage capacity
– Prefer flash for performance
– Workload Ratio: Write Heavy

Much like with vSphere, the USX Manager initiates the deployment of the Service VMs and the Infrastructure Volume (and HA) VMs to the XenServer Resource Pool:

[Screenshot 024]

In XenCenter, there are far fewer status messages during deployment. However, after a couple of minutes, the process will complete and the Service VMs and Infrastructure Volume (and HA) VMs will be deployed:

[Screenshot 025]
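
You can also confirm the deployment from the pool master’s console by listing the running VMs and looking for the newly created Service and Volume VMs:

# List running VMs on the pool; the USX Service and Volume VMs should appear
xe vm-list power-state=running params=name-label,power-state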

I’ll create a Hyper-Converged 100GB Flash Volume on the XenServer Resource Pool using the following:

[Screenshot 026]

On the Advanced Settings page, I’ll select the following:

– Prefer flash for storage capacity
– Prefer flash for performance
– Workload Ratio: Write Heavy

After a couple of minutes, the Volume VM is ready to go:

[Screenshot 027]

Let’s go ahead and mount the new volume to the XenServer Resource Pool:

[Screenshot 028]

The volume is now mounted and visible as a new Storage Repository in XenCenter:

[Screenshot 029]
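
To confirm from the pool master’s console, you can list the Storage Repositories and look for the new USX volume:

# List all SRs with their types and sizes; the USX volume should be present
xe sr-list params=name-label,type,physical-size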

Summary

That’s it! We now have two SDS volumes created: one for the VMware vSphere Cluster, and one for the Citrix XenServer Resource Pool! At this point, I’ll go ahead and create a few more volumes and additional virtual machines, and continue on with my testing.

Hopefully this blog post was informative and helps you visualize how Atlantis USX Software Defined Storage can be used to manage, present, and consume storage on VMware vSphere and NOW Citrix XenServer! To learn more about Atlantis Software Defined Storage and USX, visit www.atlantiscomputing.com. As always, if you have any questions, comments, or simply want to leave feedback, feel free to do so in the comments section below.

Thanks and good luck with the newest release of Atlantis USX 2.2!
@youngtech

  1. Jay 01-06-2016

    Thanks a lot. Your guide is very clear to follow and understand.

    I heard that 3.0 already has the stretch cluster feature. I’ll wait for your blog post on that.

  2. Mika Katajamäki 04-13-2015

    Excellent article!
    I was wondering, since both test clusters/pools were so similar in hardware, were there any differences in performance? Also, have you tried setting up clusters in two locations? Could it be this easy as well?

    Mika
