Getting Started with Atlantis USX! Basic Installation and Configuration Guide for the 2.0 GA Release

About two months ago I joined Atlantis Computing, based out of Mountain View, California, as a Sr. Solutions Architect. After recent partner trainings, customer demos, POCs, and implementations, I felt the need to share a couple of USX deployment scenarios and installation procedures to broaden understanding of USX in the community. USX 2.0 is now generally available for download to customers and key partners. If you want to know how to get access to USX, please feel free to reach out!

My colleague and VMware VCDX Hugo Phan has a ton of excellent USX resources that you can find over on his blog: vmwire.com. For this blog, I’m going to cover the basics, then quickly jump into the installation and configuration to show you just how quick and easy it is to get up and running with USX Software Defined Storage. If you’re not familiar with USX, I’d recommend you check out a couple of the following resources:

Atlantis Releases USX 2.0 – All-Flash Performance at Half the Cost of SAN Storage
Atlantis USX 2.0 Launch Webinar
Atlantis USX Solutions Brief
Storage Swiss Chalk Talk Video – How To Improve Software Defined Storage

To demonstrate the capabilities of Atlantis USX, I’ve created the following topology diagram showing local (RAM/DAS) and shared (SAN/NAS) resources:

USX_Overview

In this diagram, you can see that we can have multiple volumes (including HA VMs for those volumes) in a cluster. We can even manage multiple clusters, datacenters, and vCenter Servers using a common USX infrastructure. In each of these environments, resources could consist of SAN, NAS, DAS, or cloud-based object storage. For local or shared resources, USX works well with traditional mechanical media (7.2k NL, 10k, 15k SAS/FC), flash (SSD, PCIe, all-flash arrays, flash on DIMM), or even hybrid storage arrays.

For each volume, USX enables the administrator to pool and abstract storage from these resources to create purpose-built volumes for applications, virtual machines, or large-scale workloads. These resources can be combined to create performance and capacity pools, accelerating the modern software-defined datacenter! Atlantis’ data services are provided at the volume/HA VM layer, delivering capacity consolidation and performance acceleration in real time at ingest, within the hosts and clusters where the applications reside.

Pool

Before I get started with the install process, let’s have a quick discussion about the volume types available within USX 2.0:
– Hybrid (SAN/NAS or Local Storage)
– All Flash
– In-Memory
– Simple Hybrid (Equivalent to disk-backed ILIO)
– Simple In-Memory (Equivalent to diskless ILIO)

The two Simple volume types were introduced in USX 2.0 to accommodate Virtual Desktop Infrastructure (VDI) and Server Based Computing (SBC) scenarios, while the other three were available in previous versions of USX. The original three volume types are described quite nicely in the admin guide, shown below:

Hybrid

Hybrid is a converged storage architecture and is the default virtual volume (VV) type. The Hybrid volume type uses both a memory pool and a capacity pool to provide consistent performance across a wide variety of underlying storage. Shared storage or DAS is used for the capacity pool, and memory accelerates performance. This volume type provides a good balance between performance and capacity.

Hybrid configuration using shared storage

USXDeploy_Mgmt_GuideV1.5.1.14.1

The following figure shows a hyper-converged configuration created using DAS.

Hybrid: hyper-converged configuration using DAS

USXDeploy_Mgmt_GuideV1.5.1.14.2

In-Memory

All storage comes from the memory pool, which consists of RAM, local SSDs, flash PCIe cards, or flash on DIMMs. Memory is used both to accelerate performance and as primary storage. This configuration is not persistent, but data resilience is built in by replicating data in memory across the servers from which memory is pooled. This volume type provides the best performance, but it carries higher risk because data will be lost if all of the servers from which memory is pooled fail. The In-Memory type is most suitable for stateless applications that require very high performance, such as Hadoop, MongoDB, and Cassandra.

In-Memory volume type configuration

USXDeploy_Mgmt_GuideV1.5.1.14.3

All Flash

Flash arrays are pooled and optimized in a memory pool, or in a capacity pool that may use other storage such as shared SAN/NAS. The All Flash volume type also supports local flash, such as a Fusion-io PCIe card. This volume type provides good performance with lower capacity, and is a good choice for most applications, including Microsoft Exchange, Microsoft SQL Server, and Microsoft SharePoint.

All-Flash volume type configuration

USXDeploy_Mgmt_GuideV1.5.1.14.4

Before we get started configuring volumes, we need to make sure all our prerequisites are in place, and we have a bit of initial setup to run through. The two most important prerequisites are a vSphere 5.1 or later environment with: 1) a minimum of three hosts, and 2) a 10GbE network. As a software-only solution, USX is imported into a vSphere environment using the familiar “Deploy OVF Template” process. The OVF process is fairly straightforward: simply log in to your vSphere client and select File –> Deploy OVF Template. Browse to the path of the USX Manager OVF and follow the wizard. A couple of key questions to fill out when importing the virtual appliance are shown below:

001
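If you'd like to sanity-check those two big prerequisites (at least three hosts and 10GbE networking) before importing the appliance, here's a minimal pyVmomi sketch of such a check. The vCenter address, credentials, and cluster name are placeholders for your own environment, and this is just an optional illustrative helper rather than anything shipped with USX:

```python
# Hypothetical pre-flight check: confirm the cluster has >= 3 hosts and 10GbE uplinks.
# All addresses, credentials, and names below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only; validate certificates in production
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)
content = si.RetrieveContent()

# Find the target cluster in the vCenter inventory
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)
cluster = next(c for c in view.view if c.name == "USX-Cluster")

hosts = cluster.host
print(f"Hosts in cluster: {len(hosts)} (USX wants a minimum of three)")

for host in hosts:
    # Look for at least one physical NIC linked at 10GbE (10000 Mb)
    ten_gig = [pnic.device for pnic in host.config.network.pnic
               if pnic.linkSpeed and pnic.linkSpeed.speedMb >= 10000]
    print(f"{host.name}: 10GbE uplinks -> {ten_gig or 'NONE FOUND'}")

Disconnect(si)
```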

Once imported and powered on, fire up your browser of choice and navigate to the IP address that you configured during the import process (192.168.3.3). Log in with the default username and password (admin/poweruser):

002

When you first log in, you’ll be guided through a number of first-time setup steps. Prior to getting started with these steps, I’ll typically review the global preferences and make a couple of basic adjustments. To review the global preferences, select Settings –> Preferences from the top right corner.

003

Below are the default global preferences:

004

005

For this environment, I’ll perform the following:
– Select ‘Enable Advanced Settings’ (presents many more options when configuring volumes)
– Deselect ‘Enable Hypervisor Reservations’ (I’m running in a lab with fairly constrained resources, so I’ll turn this off for now. Not recommended for production environments but very useful for engineering labs)
– Deselect ‘Include all local storage’ (Affects initial hypervisors and storage selection process)
– Change ‘Max Storage Allocation’ to 80% (Again, because I’m running in a lab. Review your environment and consider how much storage you want allocated to USX)
– Change ‘Max Memory Allocation’ to 80% (Again, because I’m running in a lab. Review your environment and consider how much memory you want allocated to USX)

A couple of other handy options are ‘Prefer SSD for VM OS disk’ and ‘Prefer shared storage for USX VM OS disk’, although I’ll leave those deselected for this walk-through. Once all global preferences have been adjusted, click OK and we’ll walk through the five initial setup steps. Before we do, here’s a quick review of the vSphere environment I’ll be using for this walkthrough, from a host and storage perspective:

006

You can see I have six hosts in this cluster. From a datastore perspective, I have shared iSCSI SAN storage available (1.9TB), along with local SAS (256GB) and local SSD (64GB) in each of my hosts. This provides an optimal environment to demonstrate USX’s ability to accelerate existing SAN/NAS or to create a hyper-converged storage platform from local DAS. Let’s get started!
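Before we do, here's a quick bit of napkin math on what those 80% allocation caps from the global preferences mean for this particular lab. The figures come straight from the datastore view above; the calculation itself is purely illustrative, since USX's deployment planner makes the real sizing decisions:

```python
# Rough estimate of how much storage USX could claim in this lab at an 80% cap.
# Figures are taken from the environment described above; purely illustrative.
hosts = 6
shared_iscsi_gb = 1.9 * 1024      # ~1.9TB shared iSCSI datastore
local_sas_gb = 256                # per host
local_ssd_gb = 64                 # per host
max_storage_alloc = 0.80          # the 'Max Storage Allocation' preference

usable_shared_gb = shared_iscsi_gb * max_storage_alloc
usable_local_gb = hosts * (local_sas_gb + local_ssd_gb) * max_storage_alloc

print(f"Shared storage USX may consume: ~{usable_shared_gb:,.0f} GB")
print(f"Local DAS USX may consume:      ~{usable_local_gb:,.0f} GB")
```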

Click Step 1: Add VM Manager:

007

Enter a friendly name for your VM Manager (vCenter Server), IP Address/DNS, Username, and Password. Click OK once complete:

008

Review and click the small Back arrow in the top left corner to return to the list of steps.

009

Click Step 2: Select Hypervisors and Storage:

010

Select the applicable hosts and local storage from the list of available options:

011

If applicable, select the Shared Storage tab and select available iSCSI/FC/NFS storage presented to the hosts:

012

Once you’ve completed your selection, be sure to click the floppy (save) icon to save your changes. When changes to this screen have been saved, the little red tick marks (which indicate unsaved changes) will be removed:

013

Click the back arrow to return, then click Step 3: Define Networking:

014

USX gives you the option to isolate storage traffic from management traffic, if you have a non-routed iSCSI/NFS network for example. Isolating this traffic allows you to have ICMP/SNMP tools monitor the routed interfaces, while keeping the NFS/iSCSI exports on dedicated back-end networks. For my simple environment, I’ll just use a single flat network, so I’ll select ‘Yes’ for the Storage Network field. Enter applicable subnet details and click Next:

015

Select the appropriate hosts and click the drop-down to select the corresponding VMware Port Group for this network (both VMware Standard Switch and vSphere Distributed Switch port groups will be available). Click Next to proceed. My lab environment is very simple: just a single flat 10GbE network for all traffic:

016

Review and click Finish:

017

If you need to add additional networks, now is the time to do so:

018

Otherwise, click the back arrow and click Step 4: Add Name Template:

019

The name template defines the naming convention used when deploying service/aggregator VMs. This does not affect the volume VM names; those you will be able to name manually during the volume creation process. Configure the template name (a friendly name for reference) and prefix, and you’ll see a sample below. Click OK:

020

Click the back arrow and we’re ready to proceed to Step 5: Create Volume!

021

For the volume type, you’ll see the five options I mentioned earlier in this blog post (Hybrid, All Flash, In-Memory, Simple Hybrid, and Simple In-Memory):

022

For my initial volume, I want to show you an example of accelerating SAN/NAS with host RAM, so I’ll select Hybrid. Here’s an example of what my environment will look like when completed:

USX_3

I’ve entered a volume name and capacity, selected NFS for the protocol (preferred over iSCSI for vSphere), and chosen a name template:

023

On the Advanced Settings page, I’ve selected ‘Prefer Shared Storage for Volume’, which auto-selects the ‘Fast Sync’ option. If desired, I could select ‘Prefer flash memory for volume’, at which point USX will use the local SSD disks in each host for the performance tier. If I don’t select this option, server RAM will be used for the performance tier (the default behavior). The Workload Ratio can be adjusted for Read Heavy, Write Heavy, Balanced, or Custom ratios; this affects the size of the performance pool relative to the capacity pool. When I click Next, the deployment planner automatically kicks off and starts the deployment.

024
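To make the Workload Ratio idea a little more concrete, here's a tiny illustration of how a ratio translates into a performance pool size for a given volume capacity. The percentages below are invented for the sake of the example and are not USX's actual presets:

```python
# Illustrative only: how a workload ratio might size the performance pool
# relative to volume capacity. These percentages are NOT USX's real presets.
volume_capacity_gb = 100

illustrative_ratios = {
    "Read Heavy":  0.50,   # hypothetical values, just for the math
    "Balanced":    0.30,
    "Write Heavy": 0.20,
    "Custom":      0.25,   # whatever you dial in yourself
}

for profile, ratio in illustrative_ratios.items():
    perf_pool_gb = volume_capacity_gb * ratio
    print(f"{profile:11s}: ~{perf_pool_gb:.0f} GB performance pool for a "
          f"{volume_capacity_gb} GB volume")
```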

At this point, USX deploys the OVF for a Service VM to each of the hosts, bootstraps the VMs, and then deploys and configures the volume:

025

Once completed, we’ll have a service VM deployed on each host along with the volume VM:

027

When the volume has completed deployment, you’ll see all kinds of fun visuals and statistics about the volume under the Manage section (more data is available under Analytics). Once the VM has fully fired up, each of these fields will start to populate data. This will become much more useful as we begin to load up the volume with some virtual machines and data:

026

The last thing I’ll want to do is deploy an HA VM for this volume and mount it to each of the vSphere hosts. To do this, I’ll click ‘Manage Volumes’. With the volume selected, I’ll click the Actions button and select ‘Enable HA’:

028

I’ll deselect Enable Shared HA Service VMs for now and click OK to deploy the HA VM:

029

USX fires off the vSphere OVF deployment operation for the HA VM:

030

To mount the volume to each of the hosts in my cluster, I’ll click Actions and ‘Mount Volume’:

031

I’ll select my vCenter server from the drop down and check each host to mount the volume as an NFS Datastore:

032

When I click OK, USX fires off the ‘Create NAS Datastore’ task for each of the hosts and I now have my NAS datastore mapped in vSphere:

033
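If you ever need to mount the same export on a larger set of hosts, the operation can also be scripted. Here's a hedged pyVmomi sketch that performs the equivalent of the 'Create NAS Datastore' task per host; the vCenter address, credentials, NFS server address, export path, and datastore name are placeholders, since your volume VM will present its own values:

```python
# Sketch: mount the USX NFS export on each host, mirroring the
# 'Create NAS Datastore' task. Addresses, paths, and names are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)
content = si.RetrieveContent()

hosts_view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)

spec = vim.host.NasVolume.Specification(
    remoteHost="192.168.3.50",                # placeholder: volume VM's NFS address
    remotePath="/exports/usx2-hybrid-vol1",   # placeholder export path
    localPath="usx2-hybrid-vol1",             # datastore name as shown in vSphere
    accessMode="readWrite")

for host in hosts_view.view:                  # filter to your cluster's hosts as needed
    ds = host.configManager.datastoreSystem.CreateNasDatastore(spec)
    print(f"Mounted {ds.name} on {host.name}")

Disconnect(si)
```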

For NAS/NFS volumes, USX reports the free space post-deduplication. So, if I load up 80GB worth of data into my 100GB volume, vSphere will report the available space after dedupe; most likely this would be well above 20GB unless I was getting horrid dedupe ratios.
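Here's the quick arithmetic behind that statement, with an assumed 2:1 dedupe ratio purely for illustration:

```python
# Post-dedupe free-space reporting on an NFS volume (the 2:1 ratio is an assumption).
volume_capacity_gb = 100
logical_data_gb = 80           # what the guest VMs think they've written
assumed_dedupe_ratio = 2.0     # illustrative only

physical_used_gb = logical_data_gb / assumed_dedupe_ratio
reported_free_gb = volume_capacity_gb - physical_used_gb

print(f"Physical space consumed: {physical_used_gb:.0f} GB")
print(f"Free space vSphere sees: {reported_free_gb:.0f} GB (well above 20 GB)")
```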

For the next volume type, I want to make use of those SAS and SSD disks in each of my hosts, so I’ll deploy a hyper-converged volume. Here’s an example of what this volume will look like when I’m finished:

USX_2

Let’s walk through this in the USX interface. I’ll select Create Volume with the following properties: Volume Type: Hybrid, Volume Name: usx2-hybridhypcon-vol1, Capacity: 100GB, Protocol: NFS, and the VM template name:

034

For the Advanced Settings I’ll select ‘Prefer flash memory for volume’:

035

USX automatically reconfigures the service VMs to accommodate this new hyper-converged volume and kicks off the Deploy OVF Template process for the new volume:

036

Once the new volume VM is online, we’ll see similar volume-level statistics for the new volume:

037

We’ll go ahead and mount the volume and enable HA for this volume using the same procedures described above.

038

When we’re done, we have a total of 10 virtual machines deployed: 6 Service VMs (one per host), 2 Volume VMs, and 2 Volume HA VMs. We can see the two new NFS volumes have been mounted to each host:

039

That’s great! In just a short time I’ve demonstrated how USX can be used to deploy hybrid volumes against both SAN/NAS and DAS. This is an excellent example of how quick and easy it is to get up and running with Software Defined Storage, which is typically perceived as complex and difficult to deploy and manage. When the prerequisites are properly in place, we can be up and running with USX in 7 easy steps.

Now that we have these volumes created, let’s do something interesting and useful with them. To start, I’m going to import a Thin PC load testing VM, called vdiLT, that a colleague of mine (Craig Bender) created. I’ll import it as vdiLT-hybridhypcon-1, placing it on the hyper-converged volume I created. This VM has a 20GB system disk, of which about 7GB is used space:

040

With the VM powered down, the first thing we’ll do is clone it a number of times. If we look at the Host Storage view, we notice that Hardware Acceleration (VAAI) shows Supported:

041

As such, cloning the VM is a rather quick operation (5:10:08 to 5:10:17, or 9 seconds):

042

Mind you, a typical 20GB clone operation on traditional infrastructure without VAAI takes about 5-10 minutes depending on the environment’s host/network/storage performance, so this is a vast improvement. While we’re at it, since it finished copying in 9 seconds, I’ll go ahead and create another. After all, with dedupe and compression, additional VMs don’t initially consume any additional storage. So sure, let’s encourage some VM sprawl for once! Why not!
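If you'd rather script those extra clones than click through the vSphere client, a pyVmomi sketch along these lines would do it. The vCenter address and credentials are placeholders; the source VM and datastore names match the ones from this walkthrough, and the new clone name is simply one I picked for the example:

```python
# Optional: script additional clones onto the USX hyper-converged datastore.
# vCenter address/credentials are placeholders; clone name is illustrative.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)
content = si.RetrieveContent()

def find(vimtype, name):
    """Return the first managed object of the given type with the given name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    return next(obj for obj in view.view if obj.name == name)

source_vm = find(vim.VirtualMachine, "vdiLT-hybridhypcon-1")
datastore = find(vim.Datastore, "usx2-hybridhypcon-vol1")

# Keep the clone on the hyper-converged USX datastore and leave it powered off
spec = vim.vm.CloneSpec(
    location=vim.vm.RelocateSpec(datastore=datastore),
    powerOn=False)

task = source_vm.Clone(folder=source_vm.parent, name="vdiLT-hybridhypcon-2", spec=spec)
print("Clone task started:", task.info.key)

Disconnect(si)
```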

In future blog posts I’ll use these newly created virtual machines to demonstrate the Teleport functionality and more! Stay tuned…

If you have any questions, comments, or feedback, feel free to comment below, message me on Twitter, or send me an e-mail. If you want to get in contact with your local Atlantis team, use the Atlantis contact form here.

Thanks!
–@youngtech
