Review: VMware VSAN turns storage inside-out

VMware's Virtual SAN 1.0 combines easy setup and management with high availability and high performance -- and freedom from traditional storage systems

Convergence of compute and storage is all the rage in the virtualization market these days. You see it in Microsoft's Windows Server 2012 R2 with Hyper-V and Storage Spaces. You see it in third-party platforms such as Nutanix. And you see it in VMware's vSphere flagship with the addition of Virtual SAN, a new capability built into the ESXi hypervisor that turns the direct-attached storage in vSphere cluster nodes into highly available, high-performance shared storage.

The goals behind Virtual SAN, or VSAN, are both to lower overall storage costs and to eliminate the I/O latencies associated with networked storage. VSAN achieves high availability by replicating storage objects (virtual machine disks, snapshot images, VM swap disks) across the cluster, allowing admins to specify the number of failures (nodes, drives, or network) to be tolerated on a per-VM basis. It addresses latency by leveraging flash-based storage devices for write buffering and read caching, along with support for 10GbE network connectivity.


VSAN requires a minimum of three vSphere nodes to form a clustered data store. Each node in the cluster must have both SSD and HDD storage in order to join. Turning VSAN on requires nothing more than checking a single box on the settings page for the vSphere cluster. You then select either Automatic or Manual for adding disks to the VSAN storage pool, and you're done. It's that simple.
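That check box has a scripted equivalent in PowerCLI, VMware's PowerShell snap-in (more on that later). Here's a minimal sketch, assuming a PowerCLI build recent enough to carry the VSAN parameters on the cluster cmdlets (they arrived alongside vSphere 5.5, and the parameter names here are as I recall them); the vCenter address, credentials, and cluster name are placeholders:

    # Connect to vCenter (placeholder address and credentials)
    Connect-VIServer -Server vcenter.lab.local -User administrator@vsphere.local -Password 'VMware1!'

    # Enable VSAN on an existing cluster and let it claim disks automatically
    Get-Cluster -Name "VSAN-Cluster" |
        Set-Cluster -VsanEnabled:$true -VsanDiskClaimMode Automatic -Confirm:$false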

VSAN, at least in its initial release, targets a short list of use cases. Not surprisingly, VDI (virtual desktop infrastructure) is the showcase scenario, with VMware's Horizon View the first product to take advantage of the new storage layer. VMware even includes VSAN in the Advanced and Enterprise SKUs of Horizon View 6. Starting with version 5.3.1, Horizon View is specifically designed for use with Virtual SAN 5.5 data stores, meaning you'll need the latest ESXi 5.5 Update 1 to run the two together.

For this review, I was provided with hardware from Supermicro and Lenovo. The Supermicro system is a SuperServer SYS-F627R3-R72B+ with four independent nodes in a single 4U chassis. Each node has two Intel Xeon E5-2420 CPUs, 256GB of memory, five 2TB Seagate SAS 10K HDDs, and one 400GB Intel S3700 Series SATA SSD, along with two 10GbE and two 1GbE network interfaces. In addition to the SuperServer, Supermicro provided one of its SSE-X3348T 48-port 10GBase-T switches to connect the four nodes. Lenovo provided three ThinkServer RD340 1U servers, each with one Intel Xeon E5-2420 CPU, 64GB of memory, one 1TB Toshiba SAS 7200RPM HDD, one 100GB STEC M161SD2-100UCM SATA SSD, and three 1GbE network interfaces.

Note that the single HDD per node in the Lenovo cluster, while supported by VMware, is not recommended. For even a low-end VSAN node (supporting up to 15 VMs and 2K IOPS per node), VMware recommends at least five 1TB HDDs (NL-SAS). You will likely want more RAM, a larger SSD, and faster networking (i.e., 10GbE) than my Lenovo nodes have as well.

VSAN architecture

It's important to understand how VSAN works in comparison to other "software-defined storage" solutions. First and foremost is the fact that VSAN is tightly integrated with the ESXi kernel. Tight integration with ESXi means that storage is provided directly through the hypervisor, and not via a separate VM (as in the case of Nutanix, for example). It also means that VSAN requires no additional software installation beyond VMware vSphere (that is, the ESXi hypervisor) and VMware vCenter Server.

Another important architectural point concerns the hardware requirements for the cluster. VSAN requires a minimum of three nodes in order to form the cluster and will support up to 32 nodes. With three nodes, you can tolerate a single node failure and still have a cluster of two; the system will continue to run in a degraded mode after such a failure, but you can't create the cluster in the first place without three nodes present. The default autoprovision mode creates a single VSAN data store that consists of all available space on all unprovisioned hard disks on each node.

Each node in the cluster must have a minimum of one hard disk drive and one solid-state drive in order to join the cluster. These disks must not be provisioned prior to joining the VSAN cluster and must be individually addressable. This means the disk controller or HBA (host bus adapter) must be configured in pass-through mode or each drive must be individually configured in RAID 0 mode. One last point to be aware of here is that the VSAN data store will only be available to nodes that are part of the VSAN cluster.

VSAN supports SAS, near-line SAS, and SATA hard disk drives, as well as SAS, SATA, and PCI Express solid-state drives. You'll want to check the VSAN hardware compatibility list to determine if a specific device has been tested. The inclusion of SATA opens up a wide range of devices for use in a VSAN cluster.

On each host or node, the drives are arranged into as many as five disk groups with one SSD and up to seven HDDs in each group. The SSD in each disk group acts as a caching tier; it does not contribute to the total capacity of the data store. VSAN stores everything on the clustered file system as an object, so it is similar in that respect to the Nutanix solution (see my Nutanix review).
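If you opt for Manual mode when enabling VSAN, disk groups can be claimed from PowerCLI as well. The following is only a sketch -- the cmdlet and parameter names are as I recall them from the VSAN-aware PowerCLI release, and the naa.* canonical device names are placeholders for your own drives:

    # Claim one SSD plus a set of HDDs into a single disk group on one host
    New-VsanDiskGroup -VMHost (Get-VMHost "esx01.lab.local") `
        -SsdCanonicalName "naa.ssd-device-id" `
        -DataDiskCanonicalName "naa.hdd-device-1", "naa.hdd-device-2"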

User settings in VSAN 1.0 are kept to a minimum:

Number of failures to tolerate. This is the number of concurrent host, network, or disk failures the cluster will tolerate and still ensure the availability of the object (such as a virtual machine disk). This value defaults to one, meaning the system will only tolerate a single failure. Increasing that number requires more hardware, as the capacity sketch following these settings illustrates.

Number of disk stripes per object. This value defines the number of physical disks across which each replica of a storage object is striped. It defaults to one. Setting this value to greater than one might increase performance (when a request results in a cache miss), but it's not guaranteed.

Flash read cache reservation. This is the amount of flash capacity reserved on the SSD as read cache for the storage object with a default of zero. The VSAN scheduler handles cache allocation by default, although it is possible to increase the amount on an object basis to address performance issues.

Object space reservation. VSAN uses a thin-provisioning model for all objects with a specific amount reserved upon first initialization. The value for this setting is expressed as a percentage of the logical size of the object and will differ depending on the object. The value for VM swap defaults to 100 percent to reserve the full amount, while the value for virtual machine disks defaults to zero.

Force provisioning. This setting allows you to provision a storage object even if the policy requirements are not met by the VSAN data store (such as when the number of available nodes is no longer sufficient to meet the object's high-availability requirements).
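To see what "number of failures to tolerate" costs in hardware, the back-of-the-envelope PowerShell below applies the commonly cited VSAN rules of thumb: tolerating n failures means n+1 full copies of each object and at least 2n+1 hosts contributing storage (the extra hosts hold witness components). Treat it as an illustration of the arithmetic, not a VMware sizing tool.

    # Rough capacity math for a given failures-to-tolerate (FTT) setting
    function Get-VsanFttEstimate {
        param([int]$FailuresToTolerate = 1, [int]$VmdkSizeGB = 100)

        $replicas = $FailuresToTolerate + 1         # full copies of each object
        $minHosts = (2 * $FailuresToTolerate) + 1   # hosts for replicas plus witnesses
        [pscustomobject]@{
            FailuresToTolerate = $FailuresToTolerate
            ReplicaCopies      = $replicas
            MinimumHosts       = $minHosts
            RawGBConsumed      = $replicas * $VmdkSizeGB
        }
    }

    # The default of FTT=1: two copies, three hosts, 200GB of raw space per 100GB disk
    Get-VsanFttEstimate -FailuresToTolerate 1 -VmdkSizeGB 100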

Installing VSAN

Creating a VSAN cluster is simply the last step in creating a vSphere cluster and amounts to clicking a check box in vCenter Server. Of course, if you're starting completely from scratch, as I did, you'll need to install vSphere and vCenter.

For the initial boot and installation of vSphere/ESXi, I used the Supermicro IPMI management console and connected the VMware ISO image as a remote media drive. Next, I installed to a USB key and repeated the process on the remaining three nodes. The Supermicro box has a SATA Disk On Module in addition to an internal USB port for use as a boot device. The Lenovo servers have DVD drives along with USB.

One side effect of using all locally attached drives with VSAN is that you won't have any drives to use for a data store until you have the VSAN cluster up and running. With vSphere 5.5, you must use vCenter Server for all cluster management tasks, which poses a chicken-and-egg issue. I ended up using a Thecus N7710-G NAS storage box, which provides both iSCSI and NFS, as an external source until I got the VSAN cluster up and running.
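If you have an NFS export available, that temporary data store can be scripted in PowerCLI too. A minimal sketch; the NAS host name, export path, and data store name are placeholders:

    # Mount a temporary NFS export on every host until the VSAN data store exists
    Get-VMHost | ForEach-Object {
        New-Datastore -VMHost $_ -Name "bootstrap-nfs" -Nfs `
            -NfsHost "nas.lab.local" -Path "/volume1/bootstrap"
    }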

One of the requirements for VSAN is that each disk drive must be individually addressable. For some controllers, this involves a simple setting to enable pass-through mode. However, for the Supermicro nodes, I had to use the LSI controller firmware to create individual drive groups for each drive and set the reliability to none. This RAID 0, single-drive configuration had to be accomplished for each of the five 2TB drives and the SSD on all four nodes. The process was essentially the same for the Lenovo nodes.
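After wrestling with the controllers, it's worth confirming that ESXi actually sees each drive as an individual device before enabling VSAN. A quick PowerCLI check (the host name is a placeholder):

    # Each physical drive should appear as its own LUN of type "disk"
    Get-VMHost "esx01.lab.local" | Get-ScsiLun -LunType disk |
        Select-Object CanonicalName, CapacityMB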

In short, once VMware vSphere and vCenter Server are installed, enabling VSAN could hardly be easier. The hardest part may be configuring the disk controllers. If you're lucky, your disk controller makes JBOD a simple check box item.

Figure 1: VSAN configuration details are readily available through the vSphere Web Client.

Managing VSAN

The vSphere Web Client has seen many improvements since its introduction. For vSphere 5.5, you must use the new Web client for the large majority of management functions, including VSAN administration. The old Windows-based client is still there, but you won't be able to do much with it beyond basic VM management. You can still launch a remote console to any VM, which seems to work better and more consistently than the same process from the Web client.

I found the latest rendition of the vSphere Web Client to be more than adequate for most management tasks. At the same time, I found differences between the old and the new that took some getting used to. For some operations, it takes more than a few mouse clicks to navigate through the user interface and get to the point where you can actually make changes. That said, I really like the detail presented by the monitor page (see Figure 1) for both hosts and individual VMs.

VMware provides tools specifically for peering into the overall performance of the various moving parts of VSAN. For example, VSAN Observer provides a Web-based dashboard that shows latency, IOPS, bandwidth, and health statistics of the VSAN disks. Figure 2 shows the VSAN Observer dashboard with thumbnail graphs for each node in the VSAN cluster. Clicking on a "Full-size graphs" link opens up detailed graphs for each individual node.

VSAN supports VASA, the VMware APIs for Storage Awareness, allowing vCenter Server to report on a myriad of statistics and to implement storage policies (to ensure that storage requirements for virtual machines or virtual disks are met). Naturally, VSAN does not support VAAI, the VMware APIs for Array Integration, given there's no opportunity to offload storage operations from the host; you won't see a big performance boost for in-place cloning or reclaiming space with unmap. This is one area where a traditional SAN from EMC or NetApp would significantly outperform a VSAN solution. 

PowerCLI is VMware's PowerShell snap-in for driving vSphere from the command line. PowerShell is Microsoft's not-so-secret automation weapon, which means you'll need a Windows machine to actually run any scripts or use the command line. PowerCLI makes the repetition of commands much less painful and much less prone to error. I was able to use PowerCLI and PowerShell to automate much of the creating, modifying, starting, and stopping required to configure 32 virtual machines for all the performance testing described in the next section.
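For example, spinning up the worker VMs is a short loop. The sketch below is illustrative rather than the exact script I used: it assumes an I/O Analyzer VM named "ioanalyzer-template" has already been imported from VMware's OVA, and that the VSAN data store carries its default name.

    $source  = Get-VM -Name "ioanalyzer-template"    # assumed: OVA already imported
    $vmhosts = Get-Cluster "VSAN-Cluster" | Get-VMHost
    $ds      = Get-Datastore -Name "vsanDatastore"   # default VSAN data store name

    foreach ($esx in $vmhosts) {
        1..8 | ForEach-Object {
            # Clone one worker per iteration onto the VSAN data store and power it on
            New-VM -VM $source -Name ("ioa-{0}-{1}" -f $esx.Name.Split('.')[0], $_) `
                   -VMHost $esx -Datastore $ds | Start-VM
        }
    }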

VSAN performance

One of my goals in testing VSAN was to compare the level of performance available on low-cost hardware (the Lenovo three-node cluster) against the higher-end (the Supermicro four-node cluster) and to attempt to identify hardware-specific issues that could be improved with an upgrade. I measured performance by using the VMware I/O Analyzer, a freely downloadable tool from VMware Labs that makes the process of measuring storage performance easier by combining a commonly available tool (Iometer) with nifty, Web-based control magic.

Version 1.6 of the VMware I/O Analyzer (IOA) consists of a 64-bit Suse Linux Enterprise Server 11 SP2 virtual machine with two attached virtual disks. The first disk contains the operating system and testing software, while the second disk serves as the target for the tests. All Iometer traffic targets the second disk in raw mode, so it will write to the device directly, bypassing any file system.

Figure 2: The VSAN Observer dashboard displays all of the relevant statistics for VSAN nodes and drives.

In order to generate large amounts of traffic, VMware suggests using multiple I/O Analyzer VMs on each node in the VSAN cluster. To test both the four-node Supermicro cluster and the three-node Lenovo cluster, I used eight VMs on each node -- for a total of 32 worker VMs on the four-node cluster, and 24 on the three-node cluster -- with an additional I/O Analyzer VM on each cluster serving as the controller.

I/O Analyzer comes with a list of different workload types supporting a wide range of I/O sizes from 512 bytes to 512KB. Iometer provides the ability to specify the types and percentages of I/O operations (reads and writes), along with the amount of time to run each test.

To compare my two clusters, I ran two different I/O Analyzer workloads: one to measure peak IOPS and one to measure a mixture of reads and writes. The Max IOPS test used a 512-byte block size with 100 percent sequential reads, while the combo test used 4KB blocks and a mix of 70 percent reads and 30 percent writes. The results of the two tests tell two different stories. Whereas the three-node cluster held its own against the four-node cluster in the Max IOPS test (roughly 154K vs. 190K maximum total IOPS), the four-node cluster proved vastly superior (yielding roughly double the performance) in the mixed workload test. The results of the mixed workload test are presented in the chart below.

With more RAM, more CPU, a larger SSD, and 10GbE networking, the four-node Supermicro cluster more than doubled the read and write performance of the three-node Lenovo cluster.

The single most important factor in VSAN performance will be the size of the SSD cache. If the data your workload requires is not found in the flash cache, but must be accessed from rotating disk, then I/O latency will shoot up and IOPS will fall dramatically.

Note that the results for the mixed workload test shown above make use of 4GB target virtual machine disks, which (when multiplied by eight I/O Analyzer workers per node) did not exceed the SSD cache size in either cluster (100GB SSD in the Lenovo nodes, 400GB SSD in the Supermicro nodes). When I ran the same benchmark using 15GB target disks for the Lenovo cluster and 50GB target disks for the Supermicro cluster (exceeding the SSD cache size on all cluster nodes), IOPS plummeted on both clusters.
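The arithmetic behind that cliff is easy to sanity-check against your own hardware. The snippet below simply restates the numbers from my runs -- the per-node working set equals workers times target-disk size, compared against the node's SSD -- and is a rough guide only, since VSAN also reserves part of each SSD as a write buffer.

    # Per-node working set vs. flash cache, using the test configurations above
    $configs = @(
        @{ Cluster = "Lenovo";     Workers = 8; TargetGB = 4;  SsdGB = 100 },
        @{ Cluster = "Lenovo";     Workers = 8; TargetGB = 15; SsdGB = 100 },
        @{ Cluster = "Supermicro"; Workers = 8; TargetGB = 4;  SsdGB = 400 },
        @{ Cluster = "Supermicro"; Workers = 8; TargetGB = 50; SsdGB = 400 }
    )

    foreach ($c in $configs) {
        $workingSet = $c.Workers * $c.TargetGB
        "{0}: {1}GB working set vs. {2}GB SSD -> fits in cache: {3}" -f $c.Cluster, $workingSet, $c.SsdGB, ($workingSet -lt $c.SsdGB)
    }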

In short, when configuring your VSAN cluster hardware, be absolutely sure to include enough flash in each node to exceed the size of the working data set. Naturally, more RAM and 10GbE networking are nice to have. VMware recommends 10GbE for most deployment scenarios. After all, the cost has dropped considerably over the last few years, and 10GbE offers significant improvements in performance over 1GbE.

Think global, store local

VMware's Virtual SAN represents a significant step toward the stated goal of a software-defined data center. It's also somewhat of a "back to the future" experience, with storage moving into the local host machines and away from a centralized and dedicated storage appliance. My testing shows that VSAN is capable of delivering respectable performance on moderately priced hardware. Throw in 10GbE networking and you'll see impressive results on even the lowest-end hardware configuration.

Once you get past the initial disk configuration, the installation process is no different than any other VMware setup. Configuring and managing VSAN should be relatively painless for most customers. That said, VSAN is a 1.0 release: Those who need to tweak the settings may have to do some digging, reading, and testing to get what they want. The vCenter tools and the VSAN Observer offer deep insight into what's happening inside the kernel to help diagnose any significant issues. VSAN supports up to 32 nodes and 35 disks per node. If you do the math, you'll find that scales out to a whopping 4.4 petabytes of storage with current disk technology.
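For the curious, that figure assumes 4TB drives -- the largest in common use at the time -- filling every slot:

    # Maximum raw VSAN capacity, assuming 4TB HDDs in every one of the 35 slots per node
    $nodes = 32; $disksPerNode = 35; $driveTB = 4
    "{0} PB raw" -f ($nodes * $disksPerNode * $driveTB / 1000)   # prints "4.48 PB raw"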

Published costs for VSAN start at $2,495 per CPU, which translates into roughly $20,000 for the high-end Supermicro cluster. For the Lenovo cluster, the price of VSAN would be $14,970 or roughly twice the price of the hardware. VMware also sells VSAN at a price of $50 per concurrent or named user for the aforementioned VDI scenario. That makes much more economic sense for smaller deployments. It also makes sense when you get into the higher-end configurations and begin to compare the price of VSAN with that of a traditional storage system from companies like VMware's parent EMC.

The final verdict comes down to economics and implementation. VSAN in the current release has a tightly focused target use case in VDI, where it offers compelling advantages in initial cost and long-term maintenance and support. The use cases for VSAN will undoubtedly broaden over time, but that's not a bad start for a version 1.0.

This article, "Review: VMware VSAN turns storage inside-out," was originally published at InfoWorld.com. Follow the latest developments in virtualization, data center, storage, and cloud computing at InfoWorld.com. For the latest business technology news, follow InfoWorld.com on Twitter.

Read more about data center in InfoWorld's Data Center Channel.

Join the newsletter!

Or

Sign up to gain exclusive access to email subscriptions, event invitations, competitions, giveaways, and much more.

Membership is free, and your security and privacy remain protected. View our privacy policy before signing up.

Error: Please check your email address.

Tags MicrosoftVMwarestorageData CentervirtualizationServer Virtualizationhardware systemsConfiguration / maintenanceStorage virtualizationStorage Area Network

More about AdvancedISOLenovoLinuxNASNetAppNutanixSASSeagateSuseThecus

Show Comments
[]