PernixData FVP 2.5 – Quick and Painless Host-Side Read and Write SAN or NAS Storage Performance Acceleration


A year or two ago I started keeping tabs on PernixData, a Silicon Valley-based storage performance acceleration startup, around the time well-known and well-regarded industry expert Frank Denneman joined as their Chief Technologist. Frank joined several other former VMware elite, including Poojan Kumar and Satyam Vaghani, the minds behind VMware’s data products and the creation of the Virtual Machine File System (VMFS). With such an all-star former VMware team, I knew PernixData would be bound for greatness, but I have only recently started evaluating their FVP products on my own (starting with FVP 2.0). FVP 2.5 was released a little over a week ago, and I am excited to share some additional information along with a quick run-through of my experiences installing and configuring it in the lab. This is the fourth release (1.0, 1.5, 2.0, 2.5) since FVP first shipped in 2013, and it’s very well polished.

What is FVP and why is this important?

FVP is a SAN/NAS shared storage performance acceleration solution that leverages vSphere host-side RAM or flash to create performance pools known as FVP Clusters. It can accelerate reads only, passing writes through to the array transparently (Write Through mode), or, more importantly, accelerate writes in-line as well (Write Back mode). FVP’s Write Back capabilities are fairly unique in that they allow you to create failure domains (called Fault Domains) to ensure that volatile writes are protected against chassis, rack, or row failures. These writes would still be volatile in a total datacenter blackout, so plan power and cooling appropriately! It’s incredibly quick to install and configure, and doesn’t require storage migration, changes to virtual machines, or application rewrites. Simply install the vSphere host extensions, install and configure the FVP Management Server, and configure the VMs/datastores to accelerate. Painless!


As I mentioned, FVP can accelerate both reads and writes using Write Back mode (replicating writes to other hosts in the FVP Cluster), or just reads using Write Through mode. This is simply a policy setting that can be changed at the datastore level or for specific virtual machines, which is very helpful when an environment or datastore contains a mix of virtual machines with different write-consistency sensitivities. Also useful is that FVP can be configured with an understanding of failure domains (called Fault Domains), which come into play when determining replication requirements for writes within or across various failure boundaries (blade chassis, rack, row, etc.). What makes FVP slightly different from other players is that, much like VMware VSAN, it sits at the vSphere kernel level via host extension modules (a VIB package). As a primer for FVP, read the datasheet here.

FVP 2.5 Installation and Configuration

The following is a simple topology of the lab environment I will be using for this blog post (click to enlarge):

FVP 2.5 vSphere

As you can see from the topology, I have built three hosts for a vSphere 5.5 cluster. Each host boots from a local (non-SSD) 120GB disk, representing a RAID 1 system/boot volume or a MicroSD card. Each host also has four local SSDs (400GB each). In my configuration these SSD drives will not be in a RAID group, as FVP automatically takes care of data resiliency and replication across the servers and disks. Other configurations and options exist, and you could certainly use a RAID controller and disk group, provided the form factor of your disks supports it (there are no RAID options with flash on DIMM or PCIe-based flash). It’s important that no VMFS datastores be created on these disks: due to the kernel-level interaction, FVP will own these flash devices, so it’s not possible to have VMFS datastores or virtual machines stored locally once they have been added to an FVP Cluster.
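
Before adding any of these SSDs to FVP, it can be worth confirming how vSphere detects them from the ESXi shell. This is just a quick sanity check (the grep pattern is only a suggestion); devices destined for FVP should report Is SSD: true and should not carry a VMFS datastore:

esxcli storage core device list | grep -E "Display Name|Is SSD"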

FVP can be used to accelerate any type of storage that is on the vSphere HCL and can be presented to vSphere hosts. For example, in my environment I have NFS storage presented by a NetApp FAS and 3PAR block storage presented via FCoE. Let’s get started…

Prerequisites:

To stage the environment and get right to the point of installing and configuring FVP, I’ve pre-staged a VMware vCenter Server Appliance (VCSA) with vSphere Update Manager (VUM), which I will be using to install the FVP host extensions. Optionally, the VIB package can be copied to each host and imported manually (see the command-line aside at the end of this section), but in a larger environment VUM is definitely the way to go. Before configuring the FVP Management Server, I’ll load the FVP VIB package using VUM, but first let’s review the environment before FVP:

VMware vSphere 5.5: Hosts and Clusters view of vCenter \ Datacenter \ Cluster:

01

VMware vSphere 5.5: Datastores and Datastore Clusters view:

02

Installation of the FVP VIB using vSphere Update Manager (VUM):

Navigate to Update Manager, Patch Repository. Click Import Patches:

03

For FVP 2.5, click Browse and navigate to PernixData-host-extension-vSphere5.5.0_2.5.0.1-36001.zip:

04

Click Next. The Zip file will be uploaded. Review and click Finish:

05

Click Baselines and Groups and click Create next to Baselines:

06

Name the baseline and select Host Extension. Click Next:

07

Type Pernix in the search field. Select the newly imported extension and click the down arrow to add it to the new baseline. Click Next and Finish:

08

Next, we’ll attach the new baseline to the cluster level object in vCenter. Navigate to Hosts and Clusters, select the cluster and click the Update Manager tab. Click Attach:

09

Click the checkbox next to the new baseline and click Attach:

10

Click Scan:

13

Click Remediate (if there are running virtual machines in your environment, you may want to plan this and/or perform the remediation off hours):

11

With the new baseline selected, click Next. I’ll accept all defaults for remediation, clicking Next all the way through and Finish. Note: You may need to temporarily disable Admission Control.

12

After a couple of minutes, all hosts should show as compliant with the attached baseline. Great news! The VIB has been successfully installed:

14
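
As an aside, if you’d rather skip VUM in a small lab, the same offline bundle mentioned earlier can also be installed manually from an SSH session on each host. The datastore path below is just an example (adjust it for wherever you copied the zip), and putting the host in maintenance mode first is a sensible precaution:

esxcli software vib install -d /vmfs/volumes/datastore1/PernixData-host-extension-vSphere5.5.0_2.5.0.1-36001.zip

Whichever method you use, the installed extension can be confirmed from the same shell with:

esxcli software vib list | grep -i pernix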

Installation of the FVP Management Server:

For the FVP Management Server, we need a Windows server running Windows Server 2008 R2 or later. For my demo environment, I’ll use Windows Server 2012 R2. Next, we need to make sure the .NET Framework 3.5 feature is installed:

15
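
If the .NET Framework 3.5 feature isn’t already enabled, one way to add it from an elevated command prompt is with DISM. This assumes the Windows Server 2012 R2 installation media is mounted as D: (adjust the source path for your environment):

DISM /Online /Enable-Feature /FeatureName:NetFx3 /All /LimitAccess /Source:D:\sources\sxs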

For the FVP Management Server database, I’m going to use Microsoft SQL Server Express installed locally. Download SQL Server 2012 Express from:

http://www.microsoft.com/en-us/download/details.aspx?id=29062

All we need is ENU\x64\SQLEXPR_x64_ENU.exe; however, if you also want the Management Studio, you can download ENU\x64\SQLManagementStudio_x64_ENU.exe. Once downloaded, launch an elevated command prompt and cd to the directory where you saved SQL Express.

Run the following command to install locally:

"SQLEXPR_x64_ENU.exe" /ACTION=install /IACCEPTSQLSERVERLICENSETERMS /SQLSVCACCOUNT="NT AUTHORITY\System" /HIDECONSOLE /QS /FEATURES=SQL /SQLSYSADMINACCOUNTS="BUILTIN\Administrators" /NPENABLED="1" /TCPENABLED="1" /INSTANCENAME=PRNX_SQLEXP

The installation will proceed unattended and will complete without prompting. I recommend rebooting once the install is done.

16
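
After the reboot, a quick way to confirm the named instance came up is to query its Windows service from a command prompt (named instances follow the MSSQL$<instance> service naming convention); the STATE line should report RUNNING before you move on:

sc query "MSSQL$PRNX_SQLEXP"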

Now that the prerequisites are out of the way, we can proceed with the FVP Management Server installation from PernixData_FVP_Management_Server_2.5.07415.1.zip. Double-click to run the installer:

17

Accept all defaults through the installation. When prompted, enter the vCenter information:

18

Select the database instance:

19

Select the IP address, or the DNS FQDN if DNS is functioning properly. Click Next and Install:

20
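
If you go with the FQDN option, it’s worth confirming that the name resolves correctly from both the management server and your admin workstation before continuing. The hostname below is hypothetical, so substitute your own:

nslookup fvpmgmt.lab.local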

Click Yes to install JRE:

21

When completed, click Finish:

22

Next, in vCenter we need to install the FVP Plugin:

23

Click Download and Install:

24

Click Run and accept all defaults through the installation. Accept the certificate, and ignore any certificate warning you may see. Close the vSphere Client and relaunch it so it initializes with the newly installed FVP Plug-in:

After relaunching, select the cluster-level object in Hosts and Clusters and click the PernixData tab. The first time in, click Create FVP Cluster:

25

Provide a cluster name and click OK. Click Add Resources:

26

For my environment, I’m only going to use the flash devices (SSD drives), so I’ll change the drop-down to Flash and click the top-level checkbox to select all local flash disks. With all the disks selected, I’ll click OK:

27

A couple of minutes and many vSphere tasks later, all of the disks have been added to the FVP Cluster:

28

Click Datastores/VMs and click Add Datastores:

29

For my NetApp NFS volume, I’ll use a Write Back policy with write replication of Local Host + 1 peer:

30

If desired, FVP can be configured per virtual machine instead of per datastore. This is very convenient when you have different data-loss sensitivity requirements and want Write Back for some workloads and Write Through for others.

Before we jump into the various performance charts available, I’ll discuss Fault Domains, probably the most unique and compelling feature of FVP. Fault Domains allow you to create a logical map of your datacenter for Write Back mode, ensuring you have data protection across blade chassis, racks, or even rows. By default, everything is in one big fault domain appropriately called ‘Default Fault Domain’:

31

To break this up a little more, I can create three fault domains, each with one host:

32

Now, when I create a Write Back mode policy, I can select host(s) in the same fault domain and host(s) in a different fault domain:

33

The possibilities for these policies are endless, and this is a very slick interface for fairly advanced configuration settings! Of course, none of these configurations would protect volatile writes in the event of a total datacenter blackout, so it’s important to understand power and cooling failure domains when planning for volatile writes and Write Back policies.

To create a little bit of data in the FVP cluster, I’ll reboot my test VM and navigate to the Usage tab:

34

As expected, it’s a very clean and concise interface showing which host my VM is running on, the requested and active write policies, and the acceleration resources in play. It also shows usage for this VM on each of the acceleration resources. Awesome. If I had more VMs in this cluster, I would be able to navigate through them and see stats for each. Next, I’ll click the Performance tab to review some of the stats available there.

This is not a high-performance environment and I’m not doing any benchmarking, so disregard the various stats and figures; the visuals are provided simply to show what the interface looks like.

Latency performance charts:

35

IOPS performance charts:

36

Throughput performance charts:

37

Hit Rate and Eviction Rate:

38

Write Back Destaging:

39

As you can see from these various performance metrics, the data available per datastore, per virtual machine, or per FVP Cluster is pretty extensive. And because everything is integrated via the FVP Plug-in to the vSphere Client, it’s a very clean and easy-to-use interface.

Summary

I’ve definitely liked everything I’ve seen so far. In terms of flash/RAM read and write caching and performance acceleration in a vSphere environment, it’s hard to get much simpler than this! Just drop some SSDs into the hosts, install and configure FVP, buy some licenses, and you’re good to go. There’s no moving of data to a new datastore; everything is fairly seamless and integrated into the hypervisor and management stack, as you would expect from a solution that operates at the kernel level.

In terms of areas for improvement, I have to say I was surprised by the FVP Management Server piece. This blog post makes it seem much simpler than it was; there was a bit of trial and error at first to get everything just right, and I even had to resort to some good old-fashioned RTFM when I got stuck with the local SQL Express database and authentication the first time around. For a solution that is so completely integrated with vSphere at every level, I personally would have liked to see a virtual appliance for the FVP Management Server. I know that seems obvious, and I’m sure it’s on their roadmap, but this would be my only gripe with the overall admin experience in FVP 2.5. Everything else is incredibly simple and straightforward.

Hopefully this blog post was informative and helps you visualize how PernixData FVP 2.5 helps accelerate SAN/NAS storage using local Flash or RAM. It truly is quick and painless host-side read and write SAN/NAS storage performance acceleration. To learn more about PernixData FVP 2.5, go to www.pernixdata.com.

As always, if you have any questions, comments, or simply want to leave feedback, feel free to do so in the comments section below.

Thanks and good luck with the newest release of PernixData FVP 2.5!
@youngtech
