PernixData FVP 2.5 – Quick and Painless Host-Side Read and Write SAN or NAS Storage Performance Acceleration
A year or two ago I started keeping tabs on PernixData, a Silicon Valley-based storage performance acceleration startup, around the time well-known and well-regarded industry expert Frank Denneman joined as their Chief Technologist. Frank joined several other former VMware elite, including Poojan Kumar and Satyam Vaghani, the minds behind VMware's data products and the creation of the Virtual Machine File System (VMFS). With such an all-star former VMware team, I knew PernixData would be bound for greatness, but I only recently started evaluating their FVP products on my own (starting with FVP 2.0). FVP 2.5 was released a little over a week ago, and I'm excited to share some additional information along with a quick run-through of my experience installing and configuring it in the lab. This is the fourth release (1.0, 1.5, 2.0, 2.5) since FVP first shipped in 2013, and it is very well polished.
What is FVP and why is this important?
FVP is a SAN or NAS shared storage performance acceleration solution that leverages vSphere host-side RAM or flash to create performance pools known as FVP Clusters. It can accelerate reads only (Write Through mode) or, more importantly, both reads and writes in-line (Write Back mode). FVP's Write Back capabilities are fairly unique in that they let you define failure domains (called Fault Domains) to ensure that volatile writes are protected against chassis, rack, or row failures. These writes would still be volatile in a total datacenter blackout, so plan power and cooling appropriately! FVP is incredibly quick to install and configure, and doesn't require storage migration, changes to virtual machines, or application rewrites. Simply install the vSphere host extensions, install and configure the FVP Management Server, and configure the VMs/datastores to accelerate. Painless!
As I mentioned, FVP can accelerate both reads and writes using Write Back mode (replicating writes to other hosts in the FVP Cluster), or reads only using Write Through mode. This is simply a policy that can be set at the datastore level or per virtual machine, which is very helpful when an environment or datastore contains a mix of virtual machines with different write-consistency sensitivities. Also useful is FVP's understanding of failure domains (called Fault Domains), which can be created to define replication requirements for writes within or across various failure boundaries (blade chassis, rack, row, etc.). What makes FVP slightly different from other players is that, much like VMware VSAN, it sits at the vSphere kernel level via host extension modules (a VIB package). As a primer for FVP, read the datasheet here.
FVP 2.5 Installation and Configuration
The following is a simple topology of the lab environment I will be using for this blog post (click to enlarge):
As you can see from the topology, I have built three hosts for a vSphere 5.5 cluster. Each host boots from a local (non-SSD) 120GB disk, representing a RAID1 system/boot volume or MicroSD. Each host has four local SSDs (400GB each). In my configuration these SSDs will not be in a RAID group, as FVP automatically takes care of data resiliency and replication across the servers and disks. Other configurations and options exist, and you could certainly use a RAID controller and disk group, provided the form factor of your disks supports it (no RAID options with flash on DIMM or PCIe-based flash). It's important that no VMFS datastores be created on these disks: due to the kernel-level interaction, FVP owns the flash devices, so it's not possible to keep VMFS datastores or locally stored virtual machines on them once they have been added to an FVP Cluster.
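Before adding these SSDs to an FVP Cluster, it's worth confirming from the ESXi shell that the devices are seen as local SSDs and carry no VMFS partitions. A minimal sketch (the naa.xxx… device name is a placeholder for one of your own SSDs; the commands only run on an ESXi host):

```shell
# ESXi commands; on any other system this just prints a notice.
if command -v esxcli >/dev/null 2>&1; then
  # List devices with the SSD/local flags as ESXi sees them.
  esxcli storage core device list | grep -E '^naa|Is SSD|Is Local'
  # Inspect one SSD's partition table (placeholder device name);
  # an empty table means no VMFS datastore is in the way.
  partedUtil getptbl /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx
else
  echo "esxcli not available; run this on an ESXi host"
fi
```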
FVP can be used to accelerate any type of storage that is on the vSphere HCL and can be presented to vSphere hosts. For example, in my environment I have NFS storage presented by a NetApp FAS and 3PAR FC storage presented via FCoE. Let's get started…
To stage the environment and get right into the point of installing and configuring FVP, I’ve pre-staged a VMware vCenter Server Appliance (VCSA) with vSphere Update Manager (VUM) that I will be using to install the FVP host extensions. Optionally, these vib packages can be copied to each host and imported manually, but in a larger environment VUM is definitely the way to go. Before configuring the FVP Management Server, I’ll load up the FVP vib package using VUM, but first let’s review the environment before FVP:
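For reference, the manual route looks like this: copy the offline bundle to each host (e.g. via SCP) and install it from the ESXi shell. A sketch, assuming the bundle was copied to /tmp and the host is in maintenance mode:

```shell
# Install the PernixData host extension manually on one ESXi host.
if command -v esxcli >/dev/null 2>&1; then
  # -d expects the full path to the offline bundle (.zip).
  esxcli software vib install -d /tmp/PernixData-host-extension-vSphere5.5.0_184.108.40.206-36001.zip
else
  echo "esxcli not available; run this on an ESXi host"
fi
```

Doing this host by host is fine for a lab, but VUM handles staging and remediation across a whole cluster, which is why it's the better choice at scale.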
VMware vSphere 5.5: Hosts and Clusters view of vCenter \ Datacenter \ Cluster:
VMware vSphere 5.5: Datastores and Datastore Clusters view:
Installation of the FVP VIB using vSphere Update Manager (VUM):
Navigate to Update Manager, Patch Repository. Click Import Patches:
For FVP 2.5, click Browse and Navigate to PernixData-host-extension-vSphere5.5.0_184.108.40.206-36001.zip:
Click Next. The Zip file will be uploaded. Review and click Finish:
Click Baselines and Groups and click Create next to Baselines:
Name the baseline and select Host Extension. Click Next:
Type Pernix in the search field. Select the newly imported extension and click the down arrow to add it to the new baseline. Click Next and Finish:
Next, we’ll attach the new baseline to the cluster level object in vCenter. Navigate to Hosts and Clusters, select the cluster and click the Update Manager tab. Click Attach:
Click the checkbox next to the new baseline and click Attach:
Click Remediate (if there are running virtual machines in your environment, you may want to plan this and/or perform the remediation off hours):
With the new baseline selected, click Next. I’ll accept all defaults for remediation, clicking Next all the way through and Finish. Note: You may need to temporarily disable Admission Control.
After a couple minutes, all hosts should show compliant with the attached profile. Great news! The VIB has successfully been installed:
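If you want to verify beyond the VUM compliance view, each host's VIB list should now include the extension. A quick sketch from the ESXi shell:

```shell
# Confirm the PernixData host extension is present on this host.
if command -v esxcli >/dev/null 2>&1; then
  esxcli software vib list | grep -i pernix
else
  echo "esxcli not available; run this on an ESXi host"
fi
```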
Installation of the FVP Management Server:
For the FVP Management Server, we need a Windows Server running 2008 R2 or later. For my demo environment, I’ll use Windows Server 2012 R2. Next, we need to make sure .NET Framework 3.5 is installed:
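.NET Framework 3.5 can also be enabled from the command line instead of clicking through Server Manager. A sketch for Windows Server 2012 R2, wrapped so it's a no-op elsewhere (add /Source:D:\sources\sxs pointing at the install media if the feature payload has been removed):

```shell
# Enable .NET Framework 3.5 from an elevated prompt on the Windows server.
if command -v dism >/dev/null 2>&1; then
  # /all also enables any parent features NetFx3 depends on.
  dism /online /enable-feature /featurename:NetFx3 /all
else
  echo "dism not available; run this on the Windows management server"
fi
```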
For the FVP Management Server database, I’m going to use Microsoft SQL Server Express installed locally. Download SQL Server 2012 Express from:
All we need is ENU\x64\SQLEXPR_x64_ENU.exe, however if you want to download the Management Studio, you can also download ENU\x64\SQLManagementStudio_x64_ENU.exe. Once downloaded, launch an elevated command prompt and CD to the directory where you downloaded SQL Express:
Run the following command to install locally:
"SQLEXPR_x64_ENU.exe" /ACTION=install /IACCEPTSQLSERVERLICENSETERMS /SQLSVCACCOUNT="NT AUTHORITY\System" /HIDECONSOLE /QS /FEATURES=SQL /SQLSYSADMINACCOUNTS="BUILTIN\Administrators" /NPENABLED="1" /TCPENABLED="1" /INSTANCENAME=PRNX_SQLEXP
The installation will proceed unattended and will complete without prompting. I recommend rebooting once the install is done.
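A quick way to confirm the instance came up after the reboot: named instances register a Windows service called MSSQL$&lt;instance name&gt;. A sketch (run on the management server; wrapped so it's a no-op elsewhere):

```shell
# Check that the PRNX_SQLEXP instance's service exists and is running.
if command -v sc.exe >/dev/null 2>&1; then
  sc.exe query 'MSSQL$PRNX_SQLEXP'
else
  echo "sc.exe not available; run this on the Windows management server"
fi
```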
Now that the prerequisites are out of the way, we can proceed with the FVP Management Server installation from PernixData_FVP_Management_Server_2.5.07415.1.zip. Double click to run the installer:
Accept all defaults through the installation. When prompted, enter the vCenter information:
Select the database instance:
Select the IP address, or the DNS FQDN if DNS is functioning properly. Click Next, then Install:
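If you plan to register the management server by FQDN, verify forward resolution first; a missing or stale record is a common cause of plug-in connection trouble later. A sketch (fvp-mgmt.lab.local is a placeholder for your own hostname):

```shell
# Verify the management server's FQDN resolves before preferring DNS over IP.
if command -v nslookup >/dev/null 2>&1; then
  nslookup fvp-mgmt.lab.local \
    || echo "name did not resolve; fix DNS or use the IP address"
else
  echo "nslookup not available"
fi
```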
Click Yes to install JRE:
When completed, click Finish:
Next, in vCenter we need to install the FVP Plugin:
Click Download and Install:
Click Run and accept all defaults through the installation. Accept the certificate if you see a certificate warning. Close the vSphere Client and relaunch it to initialize the newly installed FVP plug-in:
After relaunching, select the cluster level in Hosts and Clusters and click the PernixData tab. The first time you launch it, click Create FVP Cluster:
Provide a cluster name and click OK. Click Add Resources:
For my environment, I’m only going to use the Flash devices (SSD drives), so I’ll change the drop-down to Flash and click the top level checkbox to select all local Flash disks. With all the disks selected, I’ll click OK:
After a couple minutes and many vSphere tasks later, all the disks are now added to the FVP Cluster:
Click Datastores/VMs and click Add Datastores:
For my NetApp NFS volume, I’ll use a Write-Back policy with a Write Replication of Local Host + 1 peer:
If desired, FVP can be configured per virtual machine instead of per datastore. This is very convenient if you have different data-loss sensitivity requirements and want Write Back for some workloads and Write Through for others.
Before we jump into the various performance charts, I'll discuss Fault Domains, probably the most unique and compelling feature of FVP. Fault Domains let you create a logical map of your datacenter for Write Back mode, ensuring data protection across blade chassis, racks, or even rows. By default, everything is in one big fault domain, appropriately called 'Default Fault Domain':
To break this up a little more, I can create three fault domains, each with one host:
Now, when I create a Write Back policy, I can select host(s) in the same fault domain and host(s) in a different fault domain:
The possibilities for these policies are endless, and this is a very slick interface for configuring fairly advanced settings! Of course, none of these configurations will protect volatile writes in the event of a total datacenter blackout, so it's important to understand your power and cooling failure domains when planning for volatile writes and write-back policies.
To create a little bit of data in the FVP cluster, I’ll reboot my test VM and navigate to the Usage tab:
As expected, a very clean and concise interface showing which host my VM is running on, the requested and active write policy, and the acceleration resources in play. It also shows this VM's usage of each acceleration resource. Awesome. If I had more VMs in this cluster, I could navigate through them and see stats for each. Next, I'll click the Performance tab to review some of the stats available there.
This is not a high-performance environment and I'm not doing any benchmarking, so disregard the specific figures; the visuals are provided simply to show what the interface looks like.
Latency performance charts:
IOPS performance charts:
Throughput performance charts:
Hit Rate and Eviction Rate:
Write Back Destaging:
As you can see from these various performance metrics, the data available per datastore, per virtual machine, or per FVP Cluster is quite extensive. And because everything is integrated into the vSphere Client via the FVP plug-in, it's a very clean and easy-to-use interface.
I've definitely liked everything I've seen so far. In terms of flash/RAM read and write caching and performance acceleration in a vSphere environment, it's hard to get much simpler than this! Just drop some SSDs into the hosts, install and configure FVP, buy some licenses, and you're good to go. No moving data to a new datastore; everything is fairly seamless and integrated into the hypervisor and management stack, as you would expect from a solution that operates at the kernel level.
In terms of areas for improvement, I have to say I was surprised by the FVP Management Server piece. This blog post makes it look simpler than it was: there was a bit of trial and error at first to get everything just right, and I even had to resort to some good old-fashioned RTFM when I got stuck on the local SQL Express database and authentication the first time around. For a solution so completely integrated with vSphere at every level, I personally would have liked to see a virtual appliance for the FVP Management Server. I know that seems obvious, and I'm sure it's on their roadmap, but this would be my only gripe with the overall admin experience in FVP 2.5. Everything else is incredibly simple and straightforward.
Hopefully this blog post was informative and helps you visualize how PernixData FVP 2.5 helps accelerate SAN/NAS storage using local Flash or RAM. It truly is quick and painless host-side read and write SAN/NAS storage performance acceleration. To learn more about PernixData FVP 2.5, go to www.pernixdata.com.
As always, if you have any questions, comments, or simply want to leave feedback, feel free to do so in the comments section below.
Thanks and good luck with the newest release of PernixData FVP 2.5!