Rethinking “storage efficiency” in HCI architectures–Part 1

Hyper-converged infrastructures (HCI) can bring several design and operational benefits to the table, adding to the long list of reasons behind its popularity. Yet, HCI also introduces new considerations in understanding and measuring technical costs associated with the architecture. These technical costs could be thought of as a usage “tax” or “overhead” on host resources. The amount attributed to this technical cost can vary quite drastically, and depends heavily on the architecture used. For an administrator, it can be a bit challenging to measure and understand. The architecture used by HCI solutions should not be overlooked, as these technical costs can not only influence the performance and consistency of the VMs, but dramatically impact the density of VMs per host, and ultimately the total cost of ownership.

With HCI, host resources (CPU, memory, and network) are now responsible for an entirely new set of duties typically provided by a storage array found in a traditional three-tier architecture. These responsibilities not only include handling VM storage I/O from end to end, but due to the distributed nature of HCI, hosts will take part in storage activity of VMs not local to the host, such as replicated writes of a VM, as well as data at rest operations and other services related to storage. These responsibilities consume host resources. The question is, how much?

This multi-part series is going to look at the basics of HCI architectures, and how they behave differently with respect to their demands on CPU, memory, and network resources. Operational comparisons are not covered, simply to keep the focus on those resource demands.

"Storage efficiency" is more than what you think
The term "storage efficiency" is commonly associated with just data deduplication and compression. With hyper-converged infrastructures, this term takes on additional meaning. Storage efficiency in HCI relates to the efficiency of how I/Os are delivered to and from the VM. Efficiency of I/O delivery to and from VMs matter not only from performance and consistency as seen by the VM, but how much resource usage is introduced to the hosts in the cluster. The latter is often never considered, yet extremely important.

HCI Architectures
HCI solutions available in today’s market not only offer different data services, but are built differently, which is just one of the many reasons why it is difficult to generalize the typical amount of overhead needed to process storage I/O. All HCI solutions will vary (some more than others) in how they provide storage services to the VMs while maintaining resources for guest VM activity. The two basic categories, as illustrated in Figure 1, are:

  • Virtual appliance approach. A VM lives on each host in the cluster, delivering a distributed shared storage plane, processing I/O and the other related activities. Depending on the particular HCI solution, this virtual appliance on each host may also be responsible for a number of other duties.
  • Integrated/in-kernel approach. The distributed shared storage system is a part of the hypervisor, where key aspects of the storage system are part of the kernel. This allows for virtual machine I/O to traverse through the native kernel I/O path for the hosts participating in that I/O activity.


Figure 1. Comparing an I/O write between HCI architectures (simplified for clarity)

HCI solutions that use a VM to process storage I/O on each host run that VM in a context (user space) no different from the application VMs running on the host. In other words, the resources allocated to this virtual appliance to perform system-level storage duties contend with the very VMs it is trying to serve. HCI solutions built into the hypervisor maintain end-to-end control and awareness of the I/O. Since an in-kernel, integrated solution allows I/O to traverse the native kernel I/O path, it takes the least "costly" path through host resources. HCI solutions built into the kernel minimize the amplification of I/Os and the CPU and memory resources it takes to process those I/Os from end to end. Sometimes virtual appliance based HCI solutions will use devices on hosts configured in the hypervisor for direct pass-through (aka “VMDirectPath”) in an attempt to reduce overhead, but many of the fundamental penalties (especially as they relate to CPU cycles) of I/O amplification through this indirect path and context switching remain.

Addressing a problem in different ways
Why are there multiple approaches? Manufacturers may state many reasons why they chose a specific approach, and why their approach is superior. Mostly, the decision comes down to technical limitations and go-to-market pressures. An HCI vendor may not have the access, or the ability, to provide this functionality natively in the kernel of a hypervisor. A virtual appliance approach is easier to bring to market, and naturally adaptable to different hypervisors, since it is little more than a virtual machine to process storage I/O.

By way of comparison, those who have full ownership of the hypervisor can integrate this functionality directly into the hypervisor, and when appropriate, build some aspects of it right into the kernel, just as other core functionality is built into the kernel. Resource efficiency, hypervisor feature integration, as well as the contextual awareness and control of I/O types are typically the top reasons why it is beneficial to have a distributed storage mechanism built into the hypervisor.

Do both approaches work? Yes. Do both approaches produce the same result in VM behavior and host resource usage? No. Running the same workloads using HCI solutions with these two different architectures may produce very different results on the VMs, and the hosts that serve them. The degree of impact will depend on the technical cost (in resource usage) of the I/O processing, and other data services provided by a given solution.

This difference often does not show up until numerous, real workloads are put on these solutions. Just as with a traditional storage array, every solution is fast when there is little to no load on it. What counts is the behavior under real load with contending resources. This is something not always visible with synthetic testing. For HCI environments, the overall “storage efficiency” of the particular HCI solution can be better compared (assuming identical hardware and workloads) by looking at the following in a real HCI environment running production workloads:

  • The average number of active VMs per host when running your real workloads.
  • The performance characteristics of the VMs and hosts when running your real workloads while hosts are busy serving other workloads.

These measurements take this topic out of an occasionally tiresome academic debate, and demonstrate the differences in real world circumstances. Ironically, faster hardware can increase, not reduce, the differences between these architectural approaches to HCI. This is not unlike what occurs quite often now at the application level, where faster hardware exposes actual bottlenecks in software/application design previously unnoticeable with older, slower hardware.

Now that an explanation has been given as to why "storage efficiency" really means so much more than data services like deduplication and compression, the next post in this series will focus on CPU resources in HCI environments, and what to look out for when observing CPU usage behaviors in HCI environments.

Does the concept of host resource usage interest you? If so, stay tuned for the book, vSphere 6.5 Host Resources Deep Dive by Frank Denneman and Niels Hagoort. It is sure to be a must-have for those interested in the design and optimization of virtualized environments. You can also follow updates from them at @hostdeepdive on Twitter.


What does your infrastructure analytics really tell you?

There is no mistaking the value of data visualization combined with analytics.  Data visualization can help make sense of the abstract, or information not easily conveyed by numbers.  Data analytics excels at turning discrete data points that make no sense on their own into findings that have context and relevance.  The two together can present findings in a meaningful, insightful, and easy to understand way.  But what are your analytics really telling you?

The problem for modern IT is that there can be an overabundance of data, with little regard to the quality of data gathered, how it relates to each other, and how to make it meaningful.  All too often, this "more is better" approach obfuscates the important to such a degree that it provides less value, not more.  It’s easy to collect data.  The difficulty is to do something meaningful with the right data.  Many tools collect metrics based not on what is most important, but on what can be easily provided.

Various solutions with the same problem
Modern storage solutions have increased their sophistication in their analytics offerings for storage.  In principle this can be a good thing, as storage capacity and performance is such a common problem with today’s environments.  Storage vendors have joined the "we do that too" race of analytics features.  However, feature list checkboxes can easily mask the reality – that the quality of insight is not what you might think it is.  Creative license gets a little, well, creative.

Some storage solutions showcase their storage I/O analytics as a complete solution for understanding storage usage and performance of an environment, advertising an extraordinary number of data points collected, and sophisticated methods of collection that are impressive by anyone’s standards.  But these metrics are often taken at face value.  Tough questions need to be asked before important decisions are made off of them.  Is the right data being measured?  Is the data being measured from the right location?  Is the data being measured in the right way?  And is the information conveyed of real value?

Accurate analytics requires that the sources of data are of the right quality and completeness.  No amount of shiny presentation can override the result of using the wrong data, or using it in the wrong way.

What is the right data?
The right data has a supporting influence on the questions that you are trying to answer.  Why did my application slow down after 1:18pm? How did a recent application modification impact other workloads?  When it comes to infrastructure performance, I’ve demonstrated how block sizes have historically been ignored in storage design, because they could not be easily seen or measured.  Having metrics around fan speed of a storage array might be helpful for evaluating your cooling system in your Data Center, but does little to help you understand your workloads.  The right data must also be collected at a rate that accurately reflects the real behavior.  If your analytics offerings sample data once every 5 or 10 minutes, how can they ever show spikes of contention in resources that impact what your systems experience?  The short answer is, they can’t.
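As a simple illustration (hypothetical numbers only), consider a 30-second latency spike buried inside a 5-minute sampling interval. The average looks perfectly healthy, even though the workload felt something very different:

```python
# Hypothetical per-second latency samples (in ms) across one 5-minute window:
# 270 seconds of normal behavior, plus a 30-second spike of contention.
samples = [2.0] * 270 + [50.0] * 30

average = sum(samples) / len(samples)
peak = max(samples)

print("5-minute average: %.1f ms" % average)   # ~6.8 ms - looks fine on a dashboard
print("Actual peak:      %.1f ms" % peak)      # 50.0 ms - what the workload experienced
```

An analytics offering sampling at that coarse an interval will faithfully report the average, and as far as the data is concerned, the spike never happened.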

The importance of location
Measuring the data at the right location is critical to accurately interpreting the conditions of your VMs, and the infrastructure in which they live.  We perceive much more than we see.  This is demonstrated most often with a playful optical illusion, but can be a serious problem with understanding your environment.  The data gathered is often incomplete, and perceiving it as if it were all the data you need leads to the wrong conclusion.  Let’s consider a common scenario where the analytics of a storage system shows great performance of a storage array, yet the VM may be performing poorly.  This is the result of measuring from the wrong location.  The array may have shown the latency of the components inside the device, but cannot account for latency introduced throughout the storage stack.  The array metric might have been technically accurate for what it was seeing, but it was not providing you a correct and complete picture.  Since storage I/O always originates with the VMs and the infrastructure in which they live, it simply does not make sense to measure it solely from a supporting component like a storage array.

Measuring data inside the VM can be equally as challenging.  An operating system’s method of data collection assumes it is the sole proprietor of resources, and may not accurately account for the fact that it is time-slicing CPU clock cycles with other VMs.  While the VM is the end "consumer" of resources, it also does not understand it is virtualized, and cannot see the influence of performance bottlenecks throughout the virtualization layer, or any of the physical components in the stack that support it.

VM metrics pulled from inside the guest OS may measure things in different ways depending on the operating system.  Consider the differences in how disk latency in Windows "Perfmon" is measured versus Linux "top."  This is the problem with data collector based solutions that aggregate metrics from different sources.  A lot of data is collected, but none of it means the same thing.

This disparate data leaves users attempting to reconcile what these metrics mean, and how they impact each other.  Even worse is when supposedly similar metrics from two different sources show different data.  This can occur with storage array solutions that hook into vCenter to augment the array based statistics.  Which one is to be believed?  One over the other, or neither?

Statistics pulled solely from the hypervisor kernel avoids this nonsense.  It provides a consistent method for gathering meaningful data about your VMs and the infrastructure as a whole.  The hypervisor kernel is also capable of measuring this data in such a way that it accounts for all elements of the virtualization stack.  However, determining the location for collection is not the end-game.  We must also consider how it is analyzed.

Seeing the trees AND the forest
Metrics are just numbers.  More is needed than numbers to provide a holistic understanding for an environment.  Data collected that stands on its own is important, but how it contributes to the broader understanding of the environment is critical.  One needs to be able to get a broad overview of an environment to drill down and identify a root cause of an issue, or be able to start out at the level of an underperforming VM and see how or why it may be impacted by others.

Many attempt to distill down this large collection of metrics to just a few that might help provide insight into performance, or potential issues.  Examples of these individual metrics might include CPU utilization, Queue depths, storage latency, or storage IOPS.  However, it is quite common to misinterpret these metrics when looked at in isolation.

Holistic understanding provides its greatest value when attempting to determine the impact of one workload over a group of other workloads.  A VM’s transition to a new type of storage I/O pattern can often result in lower CPU activity; the exact opposite of what most would look for.  The weight of impact between metrics will also vary.  Think about a VM consuming large amounts of CPU.  This will generally only impact other VMs on that host.  In contrast, a storage based noisy neighbor can impact all VMs running on that storage system, not just the other VMs that live on that host.

Conclusion
Whether your systems are physical, virtualized, or live in the cloud, analytics exist to help answer questions, and solve problems.  But analytics are far more than raw numbers.  The value comes from properly digesting and correlating numbers into a story providing real intelligence.  All of this is contingent on using the right data in the first place.   Keep this in mind as you think about ways that you currently look at your environment.

Understanding block sizes in a virtualized environment

Cracking the mysteries of the Data Center is a bit like space exploration. You think you understand what everything is, and how it all works together, but struggle to understand where fact and speculation intersect. The topic of block sizes, as they relate to storage infrastructures, is one such mystery. The term is familiar to some, but elusive enough that many remain uncertain as to what it is, or why it matters.

This inconspicuous, but all too important characteristic of storage I/O has often been misunderstood (if not completely overlooked) by well-intentioned Administrators attempting to design, optimize, or troubleshoot storage performance. Much like the topic of Working Set Sizes, block sizes are often not of great concern to an Administrator or Architect because of this lack of visibility and understanding. Sadly, myth turns into conventional wisdom – not only in what is typical in an environment, but in how applications and storage systems behave, and how to design, optimize, and troubleshoot for such conditions.

Let’s step through this process to better understand what a block is, and why it is so important to understand its impact on the Data Center.

What is it?
Without diving deeper than necessary, a block is simply a chunk of data. In the context of storage I/O, it would be a unit in a data stream; a read or a write from a single I/O operation. Block size refers to the payload size of a single unit. We can blame a bit of the confusion about what a block is on the overlap in industry nomenclature. Commonly used terms like block sizes, cluster sizes, pages, latency, etc. may be used in disparate conversations, but what is being referred to, how it is measured, and by whom may often vary. Within the context of discussing file systems, storage media characteristics, hypervisors, or Operating Systems, these terms are used interchangeably, but do not have universal meaning.

Most who are responsible for Data Center design and operation know the term as an asterisk on a performance specification sheet of a storage system, or a configuration setting in a synthetic I/O generator. Performance specifications on a storage system are often the result of a synthetic test using the most favorable block size (often 4K or smaller) for an array to maximize the number of IOPS that an array can service. Synthetic I/O generators typically allow one to set this, but users often have no idea what the distribution of block sizes is across their workloads, or if it is even possible to simulate that with synthetic I/O. The reality is that many applications draw a unique mix of block sizes at any given time, depending on the activity.

I first wrote about the impact of block sizes back in 2013 when introducing FVP into my production environment at the time. (See section "The IOPS, Throughput & Latency relationship")  FVP provided a tertiary glimpse of the impact of block sizes in my environment. Countless hours with the performance graphs, and using vscsiStats, provided new insight about those workloads, and the environment in which they ran. However, neither tool was necessarily built for real-time analysis or long-term trending of block sizes for a single VM, or across the Data Center. I had always wished for an easier way.

Why does it matter?
The best way to think of block size is as the amount of storage payload carried in a single unit.  The physics of it becomes obvious when you think about the size of a 4KB payload, versus a 256KB payload, or even a 512KB payload. Since we refer to them as a block, let’s use a square to represent their relative capacities.

[Image: relative sizes of 4KB, 256KB, and 512KB blocks represented as squares]

Throughput is the product of IOPS and the block size of each I/O being sent or received. It’s not just the fact that a 256KB block has 64 times the amount of data that a 4K block has, it is the additional effort it takes throughout the storage stack to handle it, whether that be bandwidth on the fabric, the protocol, or processing overhead on the HBAs, switches, or storage controllers. And let’s not forget the burden it has on the persistent media.
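The relationship is simple enough to express directly. A quick sketch (hypothetical workloads and numbers) shows why the same IOPS figure can mean wildly different amounts of work for the storage stack:

```python
def throughput_mb_per_sec(iops, block_size_kb):
    """Throughput is simply IOPS multiplied by the block size of each I/O."""
    return iops * block_size_kb / 1024.0

# The same 10,000 IOPS, with two very different block sizes
print(throughput_mb_per_sec(10000, 4))    # 4KB blocks   -> ~39 MB/s
print(throughput_mb_per_sec(10000, 256))  # 256KB blocks -> 2,500 MB/s
```

Both workloads would show up as "10,000 IOPS" on a spec sheet, yet one pushes 64 times the data through every component in the path.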

This variability in performance is more prominent with Flash than traditional spinning disk.  Reads are relatively easy for Flash, but the methods used for writing to NAND Flash prevent writes from achieving the same performance as reads, especially writes using large blocks. (For more detail on the basic anatomy and behavior of Flash, take a look at Frank Denneman’s post on Flash wear leveling, garbage collection, and write amplification. Here is another primer on the basics of Flash.)  A very small number of writes using large blocks can trigger all sorts of activity on the Flash devices that keeps the effective performance from behaving as it does with smaller block I/O. This volatility in performance is a surprise to just about everyone when they first see it.

Block size can impact storage performance regardless of the type of storage architecture used. Whether it is a traditional SAN infrastructure, or a distributed storage solution used in a Hyper Converged environment, the factors, and the challenges, remain. Storage systems may be optimized for a block size that does not necessarily align with your workloads. This could be the result of design assumptions of the storage system, or limits of its architecture.  The abilities of storage solutions to cope with certain workload patterns vary greatly as well.  The difference between a good storage system and a poor one often comes down to its ability to handle large block I/O.  Insight into this information should be a part of the design and operation of any environment.

The applications that generate them
What makes the topic of block sizes so interesting are the Operating Systems, the applications, and the workloads that generate them. The block sizes are often dictated by the processes of the OS and the applications that are running in them.

Unlike what many might think, there is often a wide mix of block sizes that are being used at any given time on a single VM, and it can change dramatically by the second. These changes have profound impact on the ability for the VM and the infrastructure it lives on to deliver the I/O in a timely manner. It’s not enough to know that perhaps 30% of the blocks are 64KB in size. One must understand how they are distributed over time, and how latencies or other attributes of those blocks of various sizes relate to each other. Stay tuned for future posts that dive deeper into this topic.

Traditional methods of gaining visibility
The traditional methods for viewing block sizes have been limited. They provide an incomplete picture of their impact – whether it be across the Data Center, or against a single workload.

1. Kernel statistics courtesy of vscsiStats. This utility is a part of ESXi, and can be executed via the command line of an ESXi host. The utility provides a summary of block sizes for a given period of time, but suffers from a few significant problems.

  • Not ideal for anything but a very short snippet of time, against a specific vmdk.
  • Cannot present data in real-time.  It is essentially a post-processing tool.
  • Not intended to show data over time.  vscsiStats will show a sum total of I/O metrics for a given period of time, but only for a single sample period.  It has no way to track this over time.  One must script it to produce results for more than a single period of time (see the sketch below).
  • No context.  It treats that workload (actually, just the VMDK) in isolation, and is missing the context necessary to properly interpret the results.
  • No way to visually understand the data.  This requires the use of other tools to help visualize the data.

The result, especially at scale, is a very labor intensive exercise that is an incomplete solution. It is extremely rare that an Administrator runs through this exercise on even a single VM to understand their I/O characteristics.
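A common workaround is to wrap vscsiStats in a small script that samples the ioLength histogram on an interval. The sketch below is illustrative only: the world group ID and output path are placeholders, it assumes a Python interpreter is available in the ESXi shell, and the flags should be verified against your ESXi version before relying on it.

```python
# Illustrative only: periodically capture vscsiStats ioLength histograms for one VM.
# WORLD_ID is a placeholder (find real IDs with "vscsiStats -l").
import subprocess
import time

WORLD_ID = "123456"     # hypothetical world group ID of the VM to observe
SAMPLES = 12            # number of collection windows
INTERVAL = 300          # seconds per window

subprocess.check_call(["vscsiStats", "-s", "-w", WORLD_ID])      # start collection
try:
    for i in range(SAMPLES):
        time.sleep(INTERVAL)
        histogram = subprocess.check_output(
            ["vscsiStats", "-p", "ioLength", "-w", WORLD_ID]).decode()
        with open("/tmp/ioLength_%03d.txt" % i, "w") as f:       # one histogram per window
            f.write(histogram)
finally:
    subprocess.check_call(["vscsiStats", "-x", "-w", WORLD_ID])  # stop collection
```

Even with something like this in place, the data still needs to be visualized and given context by hand, which is exactly why it rarely happens at scale.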

2. Storage array. This would be a vendor specific "value add" feature that might present some simplified summary of data with regards to block sizes, but this too is an incomplete solution:

  • Not VM aware.  Since most intelligence is lost the moment storage I/O leaves a host HBA, a storage array would have no idea what block sizes were associated with a VM, or what order they were delivered in.
  • Measuring at the wrong place.  The array is simply the wrong place to measure the impact of block sizes in the first place.  Think about all of the queues storage traffic must go through before the writes are committed to the storage, and reads are fetched. (It also assumes no caching tiers outside of the storage system exist.)  The desire would be to measure at a location that takes all of this into consideration; the hypervisor.  Incidentally, this is often why an array can show great performance, while the VM suffers in observed latency.  This speaks to the importance of measuring data at the correct location.
  • Unknown and possibly inconsistent method of measurement.  Showing any block size information is not a storage array’s primary mission, and doesn’t necessarily provide the same method of measurement as where the I/O originates (the VM, and the host it lives on). Therefore, how it is measured, and how often it is measured is generally of low importance, and not disclosed.
  • Dependent on the storage array.  If different types of storage are used in an environment, this doesn’t provide adequate coverage for all of the workloads.

The Hypervisor is an ideal control plane to analyze the data. It focuses on the results of the VMs without being dependent on nuances of in-guest metrics or a feature of a storage solution. It is inherently the ideal position in the Data Center for proper, holistic understanding of your environment.

Eyes wide shut – Storage design mistakes from the start
The flaw with many design exercises is we assume we know what our assumptions are. Let’s consider typical inputs when it comes to storage design. This includes factors such as

  • Peak IOPS and Throughput.
  • Read/Write ratios
  • RAID penalties
  • Perhaps some physical latencies of components, if we wanted to get fancy.

Most who have designed or managed environments have gone through some variation of this exercise, followed by a little math to come up with the correct blend of disks, RAID levels, and fabric to support the desired performance. Known figures are used when they are available, and the others might be filled in with assumptions.  And yet block sizes, and everything they impact, are nowhere to be found. Why? Lack of visibility, and understanding.

If we know that block sizes can dramatically impact the performance of a storage system (as will be shown in future posts) shouldn’t it be a part of any design, optimization, or troubleshooting exercise?  Of course it should.  Just as with working set sizes, lack of visibility doesn’t excuse lack of consideration.  An infrastructure only exists because of the need to run services and applications on it. Let those applications and workloads help tell you what type of storage fits your environment best. Not the other way around.

Is there a better way?
The ideal approach for measuring the impact of block sizes will always include measuring from the location of the hypervisor, as this will provide these measurements in the right way, and from the right location.  vscsiStats and vCenter related metrics are an incredible resource to tap into, and will provide the best understanding of the impact of block sizes on a storage system.  There may be some time investment to decipher block size characteristics of a workload, but the payoff is generally worth the effort.

My vSphere Home Lab. 2016 edition

Here we go again. I had no intention of writing a follow-up to my "Home Lab 2015 edition" post last year, as I didn’t foresee any changes to the lab in the coming year that would be interesting enough to write about. 

So much for predicting the future.

Sometimes Home Lab environments tend to border on vanity projects. I would like to think the recent changes in my lab were done out of need, but rationalizing wants into needs is common enough to be considered a national pastime. Nevertheless, my profession now has me testing workloads and new technologies on a daily basis, and this was a driving force behind these upgrades. Honest.

Demand often drives change. This is where the evolution of my Home Lab continues to mimic a production environment – just at a smaller scale. Budget, performance, capacity, space, and heat are all elements of a Home Lab design that are almost laughably similar to a production environment. Workloads evolve, and needs grow – quickly rendering previously used design inputs inadequate. That is exactly what happened to me, and I knew I had to invest in a few upgrades.

Compute – Performance/Testing Cluster
It was finally time to replace a few of the oldest components of the lab. My primary hosts, built off of Intel Sandy Bridge processors, used motherboards limited to just 32GB of RAM, and PCIe 2.0. I didn’t have any 10Gb connectivity other than my old InfiniBand gear, and I was consistently pushing the CPUs to their limit.

I decided to go with a pair of SuperMicro 5018D-FN4T rack mounted units. These are an incredibly small 1U form factor that feature built-in dual 10GbE and dual 1GbE interfaces, a dedicated IPMI port, a PCIe 3.0 slot, 4 drive bays, and can pack in up to 128GB of DDR4 memory. The motherboard uses the soldered-on 8-core Xeon D-1540 chip, and the power supply is built into the chassis. Both items reduce flexibility, but improve the no-brainer simplicity of the unit. What is most surprising when you get your hands on them is that they are incredibly small, yet still half empty when the case is cracked open. A third host will probably be in the works at some point, but it’s not necessary at this time.


It probably will come as no surprise that multiple PernixData FVP based acceleration tiers are an integral component of my infrastructure, so a few changes occurred in that realm.

1. Adding NVMe cards to use as a Flash based acceleration tier for FVP. For this lab arrangement, I used the Intel 750 NVMe based PCIe 3.0 card. While they are not officially on the VMware HCL, they are fine for the Home Lab, as they borrow heavily from the Intel DC P3xxxx line of NVMe cards that are on the VMware HCL. Intel NVMe cards are outstanding performers, and enjoy the benefit of completely bypassing legacy elements of the traditional storage stack on a host, such as storage controllers and SCSI commands. An NVMe based Flash device is still limited by the physics of NAND Flash, but it is an incredible performer that can make any SSD based Flash drive look quite feeble in comparison. Just make sure to use Intel’s driver for vSphere.

2. More RAM to use as a DFTM acceleration tier in FVP. I added 64GB of Micron memory, which allows me to allocate a nice chunk of RAM for FVP acceleration. The beauty of using memory as an acceleration tier is that it avoids all of the characteristics of NAND Flash, and it can leverage compression techniques. This typically increases the effective tier size between 30% and 70%, depending on workload. The larger the tier size, the more content that can live in the tier, and the less eviction that occurs against the working set of data.

Compute – Management Cluster
A management cluster in a Home Lab is great. It has allowed me to really experiment with testing workloads and new technologies without any impact to the components that run the infrastructure. My Management Cluster now comprises three Intel NUCs. I would have been perfectly happy with just a couple of NUCs as a Management cluster, but unfortunately the 16GB RAM limitation makes that a bit tough. Eventually, the NUCs will outlive their usefulness in the lab, but the great part about them is that they can easily be used as a desktop workstation, or media server. For now, they will continue to serve their purpose as a Management Cluster.

Switching
Upgrading my network meant adding 10GbE connectivity. For this, I chose a Netgear XS708E, 8 port, 10GbE switch. This would serve as a fast interconnect for east-west traffic between hosts. My adventures with InfiniBand were always interesting and educational. It’s an amazing technology, but there was just too much administrative overhead to the gear I was using. Unfortunately, there are not too many small, affordable 10GbE switches out there. The Dell 12 port X4012 10GbE switch looked really appealing based on the specs, but the ports are SFP+, so that would have meant rethinking a number of things. As for the Netgear, what do I think of it?  After configuring the product, I’m convinced the folks at Netgear wanted to punish anyone who buys the unit. All of the configuration items that should be so basic in a CLI or web based UI are obfuscated in a proprietary interface that seems to be missing half of the options you’d expect. Dear Netgear, please let me configure LAGs, trunks, MTU size, and VLANs with something remotely resembling common sense. It does work, but if I could do it over, I’d choose something else.

My network core still consists of a Cisco SG300-20 Layer 3 switch. Moving away from hosts that had 6, 1GbE ports down to hosts that had just two 1GbE ports and two 10GbE ports meant that I was able to free up space on this switch. That switch still has a bit of a premium price for a 20 port L3 switch, but it has been a rock solid component of my lab for over 4 years now.

Ancillary Components
One thing I was tired of dealing with was my wireless gateway. I’ve grown sour on any consumer based WiFi/Router solutions available. Most aren’t stable, and lack features, requiring one to crack them open with a DD-WRT build. Memory leaks and other reboot inducing behaviors are not what you want to deal with when attempting to access the lab remotely, so it was time to take a new approach. I went with the following for my gateway and wireless needs.

Motorola SB6121 DOCSIS 3.0 Cable Modem. This was purchased to replace the oversized cable modem provided by the service provider. It’s small, affordable, and prevents the cable company from changing settings on me, as they often would with their own unit.

Ubiquiti EdgeRouter PoE. This 5 port unit serves as my gateway, where one leg feeds downstream to my core switch, and another leg is used as a DMZ for my WiFi. This is a great switch that offers everything that I was looking for. Trunking, static routes, NAT and Firewalling. The multiple PoE ports makes it easy to add new wireless access points.

Ubiquiti UniFi AP Wireless Access Point. These access points pair nicely with the PoE based router above.

It’s been a rock solid, winning combination. Always on, with no random need to reboot. Total control over configuration, and no silliness from the cable provider. Mission accomplished.

Storage
This was one of the few components that didn’t change. Storage is served up by two, 5-bay Synology units with a mix of SSDs and spinning disk. I had plenty of capacity, with enough options to test various media if needed.

Mounting
Until this latest refresh, a $25 utility rack had housed the assortment of oddly shaped lab gear pretty well. With the changeover to small 1U rackmount servers and additional switchgear, it was time for an official enclosure. I went with a Tripp Lite 9U Wall Mount Cabinet. It will eventually be wall mounted, but for the time being, sits perfectly on a $12 moving dolly from Harbor Freight. The cabinet has some nice mounting ports for supplementary exhaust fans should the need arise.

Relocation
Within the first few minutes of powering up the new hosts, I realized the arrangement was going to need a new home. Server room loud?  No. But moving from 38dB to 50+dB is loud enough that you wouldn’t want to be working by it all day. There is no way 1U fans spinning at 8,000 RPM will ever be soothing. I had been quite proud of how quiet my lab gear had been up until this point. I stayed away from 1U anything, and went with quiet fans wherever I could. I tried desperately to suppress the noise, replacing all of the fans with ultra-quiet Noctua fans. Unfortunately, ultra-quiet can also mean they don’t move much air. It’s not good to disregard any delta in CFM between fans. The heat alarms made it very clear this wasn’t going to work, and I didn’t want to burn up perfectly good gear. I chose to place all of the factory fans back in the 1U servers, and the 10GbE switch, and used the Noctua fans as supplementary fans in each device. They do help the primary fans to spin at a lower rate, so the effort wasn’t a total waste. The 9U cabinet will be relocated to a more permanent location than it is now, but for the time being, it’s making a coat closet nice and warm.

What it looks like
The entire lab, including the UPS, is now self-contained, which should make its final relocation straightforward. The entire arrangement (5 hosts, 2 switches, 2 Synology NAS units, etc.) draws between 250 and 300 watts depending upon the load. Considering the old, much less capable arrangement ran at about 200 watts, I was pretty happy with the result.

[Image: the finished lab in the 9U wall mount cabinet]

In the spirit of full disclosure, the cabinet door does cover up some rather careless cable management practices. Regardless, I am thrilled with the end result and how it performs. A space efficient arrangement that is extremely powerful.

No matter how little, or how much you decide to invest in a Home Lab, I’ve learned that the satisfaction seems to be directly proportional to how much value it brings to you. Whether it be a hobby, used for professional growth, or a part of your day-to-day job duties, any sense of buyer’s remorse only seems to creep in when it’s not used. For my circumstances, that doesn’t seem to be a problem.

Working set sizes in the Data Center

There is no shortage of mysteries in the data center. These stealthy influencers can undermine performance and consistency of your environment, while remaining elusive to identify, quantify, and control. Virtualization helped expose some of this information, as it provided an ideal control plane for visibility. But it does not, and cannot properly expose all data necessary to account for these influencers. The hypervisor also has a habit of presenting the data in ways that can be misinterpreted.

One such mystery as it relates to modern day virtualized data centers is known as the "working set." This term certainly has historical meaning in the realm of computer science, but the practical definition has evolved to include other components of the Data Center; storage in particular. Many find it hard to define, let alone understand how it impacts their data center, and how to even begin measuring it.

We often focus on what we know, and what we can control. However, lack of visibility of influencing factors in the data center does not make them unimportant. Unfortunately this is how working sets are usually treated. They are often not a part of a data center design exercise because they are completely unknown. They are rarely written about for the very same reason. Ironic, considering that every modern architecture deals with some concept of localization of data in order to improve performance. Cached content versus its persistent home. How much of it is there? How often is it accessed? All of these types of questions are critically important to know.

What is it?
For all practical purposes, a working set refers to the amount of data that a process or workflow uses in a given time period. Think of it as the hot, commonly accessed data of your overall persistent storage capacity. But that simple explanation leaves a handful of terms that are difficult to qualify, and quantify. What is recent? Does "amount" mean reads, writes, or both? And does it define if it is the same data written over and over again, or is it new data? Let’s explore this more.

There are several traits of working sets that are worth reviewing.

  • Working sets are driven by the workload, the applications driving the workload, and the VMs that they run on.  Whether the persistent storage is local, shared, or distributed, it really doesn’t matter from the perspective of how the VMs see it.  The size will be largely the same.
  • Working sets always relate to a time period.  However, it’s a continuum.  And there will be cycles in the data activity over time.
  • A working set will comprise both reads and writes.  The amount of each is important to know because reads and writes have different characteristics, and demand different things from your storage system.
  • Working set size refers to an amount, or capacity, but what and how many I/Os it took to make up that capacity will vary due to ever changing block sizes.
  • Data access type may be different.  Is one block read a thousand times, or are a thousand blocks read one time?  Are the writes mostly overwriting existing data, or is it new data?  This is part of what makes workloads so unique.
  • Working set sizes evolve and change as your workloads and data center change.  Like everything else, they are not static.

A simplified, visual interpretation of data activity that would define a working set might look like the image below.

[Image: simplified visualization of data activity defining a working set]

If a working set is always related to a period of time, then how can we ever define it? Well in fact, you can. A workload often has a period of activity followed by a period of rest. This is sometimes referred to as the "duty cycle." A duty cycle might be the pattern that shows up after a day of activity on a mailbox server, an hour of batch processing on a SQL server, or 30 minutes compiling code. Taking a look over a larger period of time, duty cycles of a VM might look something like the image below.

[Image: duty cycles of a VM over a longer period of time]

Working sets can be defined at whatever time increment desired, but the goal in calculating a working set will be to capture at minimum, one or more duty cycles of each individual workload.
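As a rough illustration of the concept (not a recommendation of any particular tool), the toy sketch below estimates a working set from a hypothetical I/O trace by counting the unique blocks touched within each time window. Real workloads would also require separating reads from writes and accounting for changing block sizes far more carefully.

```python
from collections import defaultdict

# Hypothetical I/O trace entries: (timestamp_in_seconds, logical_block_address, size_kb)
trace = [
    (5, 1000, 8), (12, 1000, 8), (30, 2048, 64),        # the same block re-read counts once
    (3700, 1000, 8), (3720, 4096, 32), (3750, 8192, 64),
]

WINDOW = 3600   # one-hour windows; ideally sized to capture at least one full duty cycle
unique_blocks = defaultdict(dict)
for ts, lba, size_kb in trace:
    unique_blocks[ts // WINDOW][lba] = size_kb          # track unique blocks per window

for window, blocks in sorted(unique_blocks.items()):
    print("window %d: %d unique blocks, ~%d KB touched"
          % (window, len(blocks), sum(blocks.values())))
```

The point is not the code, but the principle: a working set is measured against the unique data touched within a duty cycle, not the raw count of I/Os.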

Why it matters
Determining working set sizes helps you understand the behaviors of your workloads in order to better design, operate, and optimize your environment. For the same reason you pay attention to compute and memory demands, it is also important to understand storage characteristics; which includes working sets. Understanding and accurately calculating working sets can have a profound effect on the consistency of a data center. Have you ever heard about a real workload performing poorly, or inconsistently, on a tiered storage array, hybrid array, or hyper-converged environment? This is because all of these are extremely sensitive to right-sizing of the caching layer. Not accurately accounting for the working set sizes of production workloads is a common reason for such issues.

Classic methods for calculation
Over the years, this mystery around working set sizes has resulted in all sorts of sad attempts at calculating them. Those attempts have included:

  • Calculate using known (but not very helpful) factors.  These generally consist of looking at some measurement of IOPS over the course of a given time period.  Maybe dress it up with a few other factors to make it look neat.  This is terribly flawed, as it assumes one knows all of the various block sizes for that given workload, and that block sizes for a workload are consistent over time.  It also assumes all reads and writes use the same block size, which is also false.
  • Measure working sets defined on a storage array, as a feature of the array’s caching layer.  This attempt often fails because it sits at the wrong location.  It may know what blocks of data are commonly accessed, but there is no context to the VM or workload imparting the demand.  Most of that intelligence about the data is lost the moment the data exits the HBA of the vSphere host.  Lack of VM awareness can even make an accurately guessed cache size on an array be insufficient at times due to cache pollution from noisy neighbor VMs.
  • Take an incremental backup, and look at the amount of changed data.  This sounds logical, but this can be misleading because it will not account for data that is written over and over, nor does it account for reads.  The incremental time period of the backup may also not be representative of the duty cycle of the workload.
  • Guess work.  You might see "recommendations" that say a certain percentage of your total storage capacity used is hot data, but this is a more formal way to admit that it’s nearly impossible to determine.  Guess large enough, and the impact of being wrong will be less, but this introduces a number of technical and financial implications on data center design. 

Since working sets are collected against activity that occurs on a continuum, calculating a typical working set with a high level of precision is not only impossible, but largely unnecessary.  When attempting to determine working set size of a workload, the goal is to come to a number that reflects the most typical behavior of a single workload, group of workloads, or a total sum of workloads across a cluster or data center.

A future post will detail approaches that should give a sufficient level of understanding on active working set sizes, and help reduce the potential of negative impacts on data center operation due to poor guesswork.

Thanks for reading


Understanding PernixData FVP’s clustered read caching functionality

When PernixData debuted FVP back in August 2013, for me there was one innovation in particular that stood out above the rest.  The ability to accelerate writes (known as “Write Back” caching) on the server side, and do so in a fault tolerant way.  Leverage fast media on the server side to drive microsecond write latencies to a VM while enjoying all of the benefits of VMware clustering (vMotion, HA, DRS, etc.).  Give the VM the advantage of physics by presenting a local acknowledgement of the write, but maintain all of the benefits of keeping your compute and storage layers separate.

But sometimes overlooked with this innovation is the effectiveness that comes with how FVP clusters acceleration devices to create a pool of resources for read caching (known as “Write Through” caching with FVP). For new and existing FVP users, it is good to get familiar with the basics of how to interpret the effectiveness of clustered read caching, and how to look for opportunities to improve the results of it in an environment. For those who will be trying out the upcoming FVP Freedom edition, this will also serve as an additional primer for interpreting the metrics. Announced at Virtualization Field Day 5, the Freedom Edition is a free edition of FVP with a few limitations, such as read caching only, and a maximum of 128GB tier size using RAM.

The power of read caching done the right way
Read caching alone can sometimes be perceived as a helpful but temporary way to improve performance, addressing only one side of the I/O dialogue. Unfortunately, this assertion tells an incomplete story. It is often criticized, but let’s remember that caching in some form is used by almost everyone, and everything.  Storage arrays of all types, Hyper Converged solutions, and even DAS.  Dig a little deeper, and you realize its perceived shortcomings are most often attributed to how it has been implemented. By that I mean:

  • Limited, non-adjustable cache sizes in arrays or Hyper Converged environments.
  • Limited to a single host in server side solutions.  (operations like vMotion undermining its effectiveness)
  • Not VM or workload aware.

Existing solutions address some of these shortcomings, but fall short in addressing all three in order to deliver read caching in a truly effective way. FVP’s architecture addresses all three, giving you the agility to quickly adjust the performance tier while letting your centralized storage do what it does best; store data.

Since FVP allows you to choose the size of the acceleration tier, this impact alone can be profound. For instance, current NVMe based Flash cards are 2TB in size, and are expected to grow dramatically in the near future. Imagine a 10 node cluster that would have perhaps 20-40TB of an acceleration tier that may be serving up just 50TB of persistent storage. Compare this to a hybrid array that may only put a few hundred GB of flash devices in an array serving up that same 50TB, and funneling through a pair of array controllers. Flash that the I/Os would still have to traverse the network and storage stack to get to, and cached data that is arbitrarily evicted for new incoming hot blocks.

Unlike other host side caching solutions, FVP treats the collection of acceleration devices on each host as a pool. As workloads are being actively moved across hosts in the vSphere cluster, those workloads will still be able to fetch the cached content from that pool using a lightweight protocol. Traditionally, host based caching would have to re-warm the data from the backend storage using the entire storage stack and traditional protocols if something like a vMotion event occurred.

FVP is also VM aware. This means it understands the identity of each cached block – where it is coming from, and going to – and has many ways to maintain cache coherency (See Frank Denneman’s post Solving Cache Pollution). Traditional approaches to providing a caching tier meant that they were largely unaware of who the blocks of data were associated with. Intelligence was typically lost the moment the block exits the HBA on the host. This sets up one of the most common but often overlooked scenarios in a real environment. One or more noisy neighbor VMs can easily pollute, and force eviction of hot blocks in the cache used by other VMs. The arbitrary nature of this means potentially unpredictable performance with these traditional approaches.

How it works
The logic behind FVP’s clustered read caching approach is incredibly resilient and efficient. Cached reads for a VM can be fetched from any host participating in the cluster, which allows for a seamless leveraging of cache content regardless of where the VM lives in the cluster. Frank Denneman’s post on FVP’s remote cache access describes this in great detail.

Adjusting the charts
Since we will be looking at the FVP charts to better understand the benefit of just read caching alone, let’s create a custom view. This will allow us to really focus on read I/Os and not get them confused with any other write I/O activity occurring at the same time.

[Image: creating a custom breakdown view in FVP]


Note that when you choose a "Custom Breakdown", the same colors used to represent both reads and writes in the default "Storage Type" view will now be representing ONLY reads from their respective resource type. Something to keep in mind as you toggle between the default "Storage Type" view, and this custom view.


Looking at Offload
The goal for any well designed storage system is to deliver optimal performance to the applications.  With FVP, I/Os are offloaded from the array to the acceleration tier on the server side.  Read requests will be delivered to the VMs faster, reducing latency, and speeding up your applications. 

From a financial investment perspective, let’s not forget the benefit of I/O “offload,” or in other words, read requests that were satisfied from the acceleration tier. With FVP, those I/Os are offloaded from the storage arrays serving the persistent storage tier, from the array controllers, from the fabric, and from the HBAs. The more offload there is, the less work for your storage arrays and fabric, which means you can target more affordable backend storage. The hero numbers showcase the sum of this offload nicely.

[Image: FVP hero numbers summarizing total offload]

Looking at Network acceleration reads
Unlike other host based solutions, FVP allows for common activities such as vMotions, DRS, and HA to work seamlessly without forcing any sort of rewarming of the cache from the backend storage. Below is an example of read I/O from 3 VMs in a production environment, and their ability to access cached reads on an acceleration device on a remote host.

[Image: read I/O from three VMs, served in part from remote acceleration devices]

Note how latency remains low on those read requests that came from a remote acceleration device (the green line).

How well is my read caching working?
Regardless of which write policy (Write Through or Write Back) is being used in FVP, the cache is populated in the same way.

  • All read requests will place the data into the acceleration tier as it is fetched from the backing storage.
  • All write I/O is placed in the cache as it is written to the physical storage.

Therefore, it is easy to conclude that if a read I/O did NOT come from the acceleration tier, it is for one of three reasons:

  • A block of data had been requested that had never been requested before.
  • The block of data had not been written recently, and thus was not residing in cache.
  • A block of data had once lived in the cache (via a read or write), but had been evicted due to cache size.

The first two items reflect the workload characteristics, while the last one is a result of a design decision – that being the cache size. With FVP you get to choose how large the devices are that make up the caching tier, so you can determine ultimately how much the solution will benefit you. Cache size can have a dramatic impact on performance because there is less pressure to evict data that has already been cached to make room for new data.
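To make the relationship between cache size and eviction pressure concrete, here is a toy LRU cache model with a made-up access pattern. It is not how FVP manages its acceleration tier internally; it simply shows why a larger tier holds more of the working set and evicts less:

```python
from collections import OrderedDict
import random

def lru_hit_rate(accesses, cache_blocks):
    """Generic LRU model: fraction of accesses served from a cache of a given size."""
    cache, hits = OrderedDict(), 0
    for block in accesses:
        if block in cache:
            hits += 1
            cache.move_to_end(block)
        else:
            cache[block] = True
            if len(cache) > cache_blocks:
                cache.popitem(last=False)        # evict the least recently used block
    return hits / float(len(accesses))

random.seed(1)
workload = [random.randint(0, 9999) for _ in range(100000)]   # ~10,000-block working set

for size in (1000, 5000, 10000):
    print("cache of %5d blocks -> hit rate %.0f%%" % (size, 100 * lru_hit_rate(workload, size)))
```

The shape of that curve depends entirely on the workload’s access pattern, which is why knowing working set sizes matters so much.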

Visualizing the read cache usage
This is where the FVP metrics can tell the story. When looking at the "Custom Breakdown" view described earlier in this post, you can clearly see on the image below that while a sizable amount of reads were being serviced from the caching tier, the majority of reads (3,500+ IOPS sustained) in this time frame (1 week) came from the backing datastore.

[Image: custom breakdown over one week showing most reads served from the backing datastore]

Now, let’s contrast this to another environment and another workload. The image below clearly shows a large amount of data over the period of 1 day that is served from the acceleration tier. Nearly all of the read I/Os, and over 60MBps of throughput, never touched the array.

[Image: custom breakdown over one day showing nearly all reads served from the acceleration tier]

When evaluating read cache sizing, this is one of the reasons why I like this particular “Custom Breakdown” view so much. Not only does it tell you how well FVP is working at offloading reads, it tells you the POTENTIAL of all reads that *could* be offloaded from the array.  You get to choose how much offload occurs, because you decide on how large your tier size is, or how many VMs participate in that tier.

Hit Rate will also tell you the percentage of reads that are coming from the acceleration tier at any point in time. This can be an effective way to view cache hit frequency, but to gain more insight, I often rely on this "Custom Breakdown" to get better context of how much data is coming from the cache and the backing datastores at any point in time. Eviction rate can also provide complementary information if it shows the eviction rate creeping upward.  But there can be cases where even a low eviction percentage evicts enough cached data over time to impact whether the data a workload needs is still in cache.  Thus the reason why this particular "Custom Breakdown" is my favorite for evaluating reads.

What might be a scenario for seeing a lot of reads coming from a backing datastore, and not from cache? Imagine running 500 VMs in an acceleration tier size of just a few GB. The working set sizes are likely much larger than the cache size, and will result in churning through the cache and not show significant demonstrable benefit. Something to keep in mind if you are trying out FVP with a very small amount of RAM as an acceleration resource. Two effective ways to make this more efficient would be to 1.) increase the cache size or 2.) decrease the number of VMs participating in acceleration. Both will achieve the same thing; providing more potential cache tier size for each VM accelerated. The idea for any caching layer is to have it large enough to hold most of the active data (aka "working set") in the tier. With FVP, you get to easily adjust the tier size, or the VMs participating in it.

Don’t know what your working set sizes are?  Stay tuned for PernixData Architect!

Summary
Once you have a good plan for read caching with FVP, and arrange for a setup with maximum offload, you can drive the best performance possible from clustered read caching. On its own, clustered read caching implemented the way FVP does it can change the architectural discussion of how you design and spend those IT dollars.  Pair this with write-buffering with the full edition of FVP, and it can change the game completely.

Dogs, Rush hour traffic, and the history of storage I/O benchmarking–Part 1

Evaluating performance of x86 based servers and workstations has had a history of deficiency. Twenty years ago, Administrators who tested system performance usually did little more than run a simple CPU benchmark to see how much faster a 50MHz system was than a 25 MHz system. Rarely did testing go beyond this. Nostalgia aside, it really was a simpler time.

Fast forward a few years, and testing became slightly more sophisticated. Someone figured out it might be good to test the slowest part of the system (storage), so methods and tools were created to accommodate. Storage moved beyond the physical confines of the server by using dedicated LUNs in a SAN array. The LUNs may not have been shared, but the fabric and the entry points to the array were. However, testing storage generally marched forward with little change. Virtualization changed the landscape even further by doing away with the notion of a dedicated LUN for a single system. Now, the fabric and every component of the storage system were shared.

Testing tools came and went, with some being nothing more than orphaned side projects. Some tools have more dials to turn, but many still run under the assumption that they are testing a physical host on local spinning disk. They do little to try to emulate a real workload, because they have no idea what that means. Many of these tools try to combine load generation with a single, final number for performance measurement, almost as if whatever happened between start and finish didn’t matter.

Testing methods didn’t evolve much either. The quest for “top speed” was never supplanted by any other method, which is noteworthy considering that a critical measurement of anything shared is its performance under load or contention. Storage architectures and the media they use have evolved, but that evolution is rarely accounted for properly in testing. Often lost in the speeds-and-feeds discussion is the part that really counts – the performance of the applications and the VMs they live on.

This post will point out the flaws of synthetic testing of storage performance (the tools, and the techniques), but it may incorrectly give the impression that they are useless.  Quite the contrary actually.  They can be very helpful when used the correct way, and for the right reasons.  More on this later.

Deficiencies of benchmarks as a meaningful measuring stick
“I don’t use benchmarks. I have users” — Ancient Twitter Proverb

There is no substitute for the value of observing real-world performance characteristics, but that does little to address the difficulty of measuring that performance in a repeatable way. Real workloads are a collection of widely varying behaviors that all have different types of impact on an environment and a user experience. Testing system performance is important, but only when you properly understand what the testing tools are actually producing.

Synthetic benchmarks offer a number of benefits. They are typically very easy to run, and often produce some dashboard result that can be referenced later. But these tools and test methods share common characteristics that rarely generate anything resembling real data patterns. Among those characteristics are:

  • I/O generated from them is not a closed loop dialogue
  • They do not mimic dynamic variables of real workloads
  • Improper testing practices in a clustered compute environment

All of these warrant more detail, so let’s elaborate on each one.

I/O generated from them is not a closed loop dialogue
A simplified way to describe a typical I/O dialogue would be this: data is fetched, it is processed in some way, then it is sent on its way. The I/O “signature” of a workload could be described by the pattern and degree in which this dialogue occurs. It is a pattern that repeats frequently if you observe workloads long enough.

Consider the fetching of some data, the processing of some data, and the writing of data. One might liken this process to a dog fetching a ball, you wiping the slobber off, and throwing it out again. Over and over again, and in that order. Single threaded of course.

[Animation: fetching one ball at a time – fetch, process, throw it out again]

Synthetic load generators attack this quite differently. The one and only job of a synthetic I/O generator is to fill up the queues as fast as possible, using every CPU cycle available. The I/O generator has no regard for the data itself; the data is not processed in any way, because that is not what was asked of it. By comparison, synthetic I/O pretty much looks like this:

[Animation: balls thrown non-stop with no processing in between – synthetic I/O]

With a synthetic I/O generator, every CPU cycle is pushed to perform a singular action. Reads are requested and writes are issued in an all-or-nothing fashion. Sure, some generators allow you to mix reads and writes, but the problem remains: they do not reflect any meaningful dialogue, and cannot mimic a real workload.
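
The difference is easy to see in code. Below is a deliberately simplified sketch (plain Python against a local scratch file standing in for any storage target; not a real load generator) contrasting the closed-loop "fetch, process, write" dialogue with an open-loop generator that simply issues I/O back to back and never touches the data:

```python
import os
import random

PATH = "scratch.bin"        # stand-in for any storage target
BLOCK = 16 * 1024           # 16K I/O size, chosen arbitrarily

def closed_loop(iterations: int) -> None:
    """Real-workload style: read, spend time processing, then write, in that order."""
    with open(PATH, "r+b") as f:
        for _ in range(iterations):
            f.seek(random.randrange(1024) * BLOCK)
            data = f.read(BLOCK)                 # fetch the ball
            digest = sum(data) % 251             # wipe the slobber off (do real work)
            f.write(bytes([digest]) * BLOCK)     # throw it back out

def open_loop(iterations: int) -> None:
    """Synthetic-generator style: issue I/O as fast as possible, discard the data."""
    with open(PATH, "r+b") as f:
        for _ in range(iterations):
            f.seek(random.randrange(1024) * BLOCK)
            f.read(BLOCK)                        # result is never looked at

if __name__ == "__main__":
    with open(PATH, "wb") as f:                  # pre-fill a 16MB scratch file
        f.write(os.urandom(1024 * BLOCK))
    closed_loop(1000)
    open_loop(1000)
    os.remove(PATH)
```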

They do not mimic dynamic variables of real workloads
Real workloads consume resources (CPU, memory, storage) far differently than their synthetic counterparts. At any given time, storage I/O will be a varying mix of reads versus writes, of I/O sizes, and of one or many CPU threads issuing it. The two images below show a 6 1/2 minute snippet of real I/O taken from a single VM in a production environment (using vscsiStats and Excel surface charts). During this time of heavy activity, notice how much the type of I/O in play varies.

Below you see the number of read I/Os, and the respective I/O sizes. Typically between 4K and 32K in size.

[Image: read I/O count by I/O size over the sample period – mostly 4K to 32K]

Below you see the number of write I/Os, and the respective I/O sizes. The majority of sizes range from 32K to 512K in size. This is occurring at the same time on the same VM.

[Image: write I/O count by I/O size over the same period on the same VM – mostly 32K to 512K]

Here you can see that the read/write ratio varies for just this single VM, and more importantly, the size and the number of I/Os are all over the place. I/O sizes can have an enormous impact on storage performance, so one can imagine the difficulty in emulating them accurately. VMware’s I/O Analyzer attempts to simulate a real workload by replaying a trace file captured from one, but it still will not behave the same way as multiple VMs on multiple hosts generating widely varying I/O patterns. Analytics from storage arrays don’t help much either, as they are unable to see the pattern of data in this way.
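
This is also why a single, fixed block size in a test misses the mark. As a hedged sketch of what a more realistic generator would have to do, the snippet below samples the operation type and I/O size from an empirical distribution rather than issuing one size forever. The bucket weights are made-up placeholders, not the actual data behind the charts above; real weights would come from something like a vscsiStats ioLength histogram:

```python
import random

# Hypothetical histogram buckets (I/O size in bytes -> relative weight),
# loosely shaped like the charts above. Replace with measured values.
READ_SIZES  = {4096: 5, 8192: 3, 16384: 2, 32768: 1}
WRITE_SIZES = {32768: 4, 65536: 3, 131072: 2, 524288: 1}
READ_RATIO  = 0.6   # in a real workload this also drifts over time

def next_io() -> tuple:
    """Pick the next I/O the way a real VM might: varying op type and size."""
    if random.random() < READ_RATIO:
        op, sizes = "read", READ_SIZES
    else:
        op, sizes = "write", WRITE_SIZES
    size = random.choices(list(sizes), weights=list(sizes.values()))[0]
    return op, size

if __name__ == "__main__":
    print([next_io() for _ in range(10)])   # e.g. [('read', 8192), ('write', 131072), ...]
```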

Improper testing practices in a clustered compute environment
A typical administrator who tests storage performance usually does so by setting up a single system (VM) to test peak performance (IOPS, throughput, and latency) against a shared storage backend. It sounds logical at first, but this method doesn’t reflect the way data is handled in a clustered compute environment. Storage I/O from a clustered compute arrangement behaves in a way that is not unlike congestion on an interstate freeway. The performance of a freeway cannot be evaluated by a single car driving on it. Its performance is measured when it is under load, with multiple cars that have different intentions, destinations, and sizes, and all of the other variables that introduce congestion. Modern traffic simulation and modeling solutions account for all of these variables to measure and improve what matters most – real traffic.

Unfortunately, most testers and tools take this same “single car” approach, and do not account for one of the most important elements in modern virtualized infrastructures: the clustered compute layer. A fast storage infrastructure needs to be able to handle the given number of compute nodes (physical hosts) now and in the future. After all, the I/Os are ALWAYS generated by the VMs and the collection of hosts they live on – not the backend storage. Painfully obvious, but often overlooked.

Take a look below. This is an illustration of I/O activity in a real environment (traditional clustered compute with SAN architecture). Green lines represent read I/Os and red lines represent write I/Os.

[Image: I/O activity in a real environment – many VMs across multiple hosts driving reads (green) and writes (red) to the array]

Now, let’s look at I/O activity from a traditional synthetic benchmarking approach. As you can see below, it looks pretty different.

[Image: I/O activity from a traditional synthetic benchmark – a single test VM on a single host driving the array]

Storage I/O generated in a real environment is a result of the number of nodes in a cluster and the workloads running on them. So a better (but far from perfect) way to test, at the very minimum, would be:

[Image: a better minimum approach – load generated from VMs on every host in the cluster]

In a traditional architecture with an array that exceeds the capabilities of the I/O generated from a single host, this is the method most commonly used to measure the absolute high-water-mark numbers of that array. Typically, storage manufacturers quote the numbers from the array, because it is a single point of measurement that fits nicely in marketing materials. Unfortunately, it doesn’t measure what really counts: the numbers as seen by the VMs – a fact that must not be forgotten when performing any sort of performance testing. The array, of course, always hits a limit at some point due to its own characteristics, or the fabric the I/O has to traverse.

In any sort of clustered compute system, you cannot recognize the full power of the compute platform by testing from one VM. The same thing goes for any type of distributed storage architecture. With clustered, host-based acceleration solutions like PernixData FVP, or even hyper-converged solutions, the approach will have to be similar to the above in order to measure correctly. These are different architectures that reshape the traditional data path, and the testing recommendations above should help in evaluating their performance. This approach also puts the focus back where it should be: the performance of the VMs, not some irrelevant numbers from a physical storage array.
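
In practice, that means driving load from many VMs on many hosts at the same time and reporting what each VM observes, rather than what the array reports. The sketch below is only a conceptual skeleton: threads stand in for VMs spread across hosts, and issue_io() is a placeholder you would replace with a real load generator running inside each VM.

```python
import statistics
import threading
import time

def issue_io() -> None:
    """Placeholder for a real I/O issued by a load generator inside a VM."""
    time.sleep(0.001)   # pretend the I/O took roughly 1ms

def vm_worker(latencies_ms: list, ios: int) -> None:
    """One 'VM': issue I/Os and record the latency *it* observed."""
    for _ in range(ios):
        start = time.perf_counter()
        issue_io()
        latencies_ms.append((time.perf_counter() - start) * 1000)

def run_cluster_test(vms_per_host: int, hosts: int, ios_per_vm: int = 100) -> None:
    results, threads = [], []
    for _ in range(hosts * vms_per_host):          # all VMs generate load concurrently
        lat = []
        results.append(lat)
        t = threading.Thread(target=vm_worker, args=(lat, ios_per_vm))
        threads.append(t)
        t.start()
    for t in threads:
        t.join()
    for i, lat in enumerate(results):              # report per-VM numbers, not an array total
        p95 = sorted(lat)[int(len(lat) * 0.95)]
        print(f"VM {i}: p50={statistics.median(lat):.2f}ms  p95={p95:.2f}ms")

if __name__ == "__main__":
    run_cluster_test(vms_per_host=4, hosts=8)
```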

 

Proper simulation of I/O across all hosts will allow you to adequately factor in the performance of the storage fabric and all connection points. Most fabrics are quite fast when there isn’t any traffic on them. Unfortunately, that isn’t very realistic. It is important to understand the impact the fabric introduces as the environment is scaled. Since the fabric is what connects all of the hosts fetching and committing data to a storage array, we need to simulate how everything (HBAs, storage array controllers, switches, etc.) performs under contention. If you haven’t already done so, take a few moments to read one of my favorite posts from Frank Denneman, Data Path is not managed as a clustered resource.

Testing with multiple VMs on multiple hosts with FVP also allows you to take advantage of the per-VM acceleration (write buffering & read caching) capabilities across a clustered compute environment. It is one of the reasons why FVP’s decoupled architecture can scale so well, and why real workloads become such a beneficiary of the architecture.

You thought we were finished, didn’t you
There are just too many ways synthetic benchmarks are misused to cover in a single post. Stay tuned for Part 2 for more observations on why they are inadequate as the sole test for modern environments, and most importantly, when and how they can actually be useful.