Solarize your Home Lab, and your Home

A notorious trait of vSphere Home Labs is that they start out simple and modest, then evolve into something resembling a small Data Center. As the Home Lab grows in size and sophistication, elements such as power, cooling, and noise can eventually become a problem. IT folks are typically technology geeks at heart, so the first logical step in addressing a problem introduced by one technology is to… well, tackle it with another technology. This post isn’t really about my Home Lab, but about how I’ve chosen to power the home where the lab runs: with a residential solar system. A few have asked me to provide some information on the setup, so here it is.

My interest in solar goes back as far as I can remember. As a young boy I watched my father build four 4’x8′ panels filled with copper tubing to supplement the natural gas furnace that provided hot water. It turns out that wasn’t his first adventure with solar. Growing up on the plains of the Midwest during the heart of the Great Depression in the 1930s, he cobbled together a crude solar system so his family could have hot water for an outdoor shower. I marveled at his ingenuity.

Basics of a modern residential solar system
Residential solar systems typically consist of a collection of panels housing photovoltaic (PV) cells, connected in series, generating direct current (DC). Each panel has a wattage rating. My panels, 20 in total, are made by Itek Energy and rated at 280 Watts per panel. Multiplied by 20, this gives the potential to generate 5.6kW in direct sunlight with optimal orientation. PV solar is inherently DC, so it must be converted to AC via an inverter, and converting DC to AC (or vice versa) usually incurs some efficiency loss. Considering that most electronic devices run on DC and have transformers of their own, this is a humorous reminder that Thomas Edison and Nikola Tesla are still battling it out after all these years.

Solar panels are typically mounted on a generally south-facing side of the roof to collect as much direct sunlight as possible. Ideally, the panels would always be perpendicular to the sun’s rays. With fixed mounting on a roof, this just isn’t going to happen, but fixed mounting in non-ideal situations can still yield good results. For instance, even though my roof has a far from perfect orientation, the results are impressive. An azimuth of 180 degrees is true South, and considered ideal in the northern hemisphere. My azimuth is 250 degrees (on a 6:12 pitch roof), meaning it faces 70 degrees westward of ideal. Even so, my 5.6kW solar system peaks out at around 5.2kW, and it catches more afternoon light than morning light, often an advantage in areas prone to morning fog or marine air. This less than perfect orientation is estimated to reduce the panels’ total production by only about 10% over the course of a year. The 400 Watt shortfall from the rated maximum is the result of loss from the inverter converting to AC, as well as some loss to atmospheric conditions.

Sizing of residential solar systems is often driven by the following design requirements and constraints:

  • How many panels can fit on the roof with ideal orientation?
  • What is your average electricity usage per day (in kWh)?
  • What state incentives would make for the ideal size of a system?
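A rough first pass at that sizing math can be sketched in a few lines. All of the inputs below (sun hours, loss factor, daily usage) are illustrative assumptions, not figures from my install:

```python
import math

# Rough first-pass solar sizing sketch. All inputs are illustrative
# assumptions; a real estimate must factor in weather, shading, and losses.

def panels_needed(daily_usage_kwh, panel_watts=280, sun_hours=4.0,
                  derate=0.85):
    """Estimate the panel count needed to offset average daily usage.

    sun_hours: average equivalent full-sun hours per day for the site.
    derate:    combined inverter, orientation, and atmospheric losses.
    """
    # Daily energy one panel actually delivers, in kWh
    kwh_per_panel = (panel_watts / 1000.0) * sun_hours * derate
    return math.ceil(daily_usage_kwh / kwh_per_panel)

# A home averaging 19 kWh/day would need about 20 of these 280W panels
print(panels_needed(19.0))
```

A qualified installer, or the NREL calculator mentioned below, accounts for far more variables than this, but the basic shape of the estimate is the same.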

The good news is that sizing a system and estimating its capabilities is far more sophisticated than guesswork. Any qualified installer will be able to run the numbers for your arrangement and give you full ROI estimates. The National Renewable Energy Laboratory (NREL) has a site that allows you to plug in all of your variables, and also factors in local weather data, to provide a detailed analysis of a proposed environment.

Grid-tied versus battery-backed
Many, if not most, residential solar installations these days are grid-tied systems. This means the solar supplements your power from the grid: the home consumes the power from the panels first, and any overabundance of power generated by the panels is fed back into the grid, earning a credit from your power provider. This is called "net metering" and provides an experience that is seamless to the consumer. One should be careful not to oversize a grid-tied system, because some power providers cap net metering and how much they pay you for the electricity you generate.
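The billing mechanics of net metering come down to simple arithmetic. The sketch below assumes excess generation is credited at the full retail rate; the rate and the monthly figures are illustrative assumptions:

```python
# Net metering sketch: when the panels produce more than the home consumes,
# the excess is fed back to the grid and credited to your account.
# The rate and the monthly figures below are illustrative assumptions.

RETAIL_RATE = 0.10  # dollars per kWh, assumed same for consumption and credit

def monthly_bill(consumed_kwh, generated_kwh, rate=RETAIL_RATE):
    """Dollars owed for net grid usage; negative means a credit."""
    net_kwh = consumed_kwh - generated_kwh
    return net_kwh * rate

print(monthly_bill(900, 750))  # drew 150 kWh net from the grid
print(monthly_bill(600, 800))  # fed 200 kWh back: a credit
```

Providers that pay less than the retail rate for exported power, or that cap the credit, change this math, which is why oversizing a grid-tied system can be a poor investment.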

A residential solar system may also be battery-backed. The benefit, of course, would be full independence from the grid. However, this introduces capital and operational costs not associated with grid-tied systems. The system may have to be sized larger to ensure adequate power on days when the panels can’t generate as much electricity as you hoped. Battery-backed systems may or may not be eligible for some of the subsidies in your area. Grid-tied systems avoid the need for this infrastructure, and in many ways the grid can be thought of as the battery backup for your home when your solar system isn’t generating enough electricity.

How to know how well it is working
Thanks to modern technology, monitoring solutions can give you full visibility into the performance of your solar panels. My system uses an eGauge Systems data logger. Since most of my career in IT has involved interpreting graphs and performance data in an attempt to better understand systems, monitoring the system has been one of the more entertaining aspects of the process. One can easily see, via web interface, how much load is being drawn by activities in the home, and how much power is being generated by the solar. The eGauge solution offers quick and easy access to monitoring of the environment via mobile devices or web browser. Entering all of your variables will also help it determine how much money you are saving for any given period of time. As the image below shows, it is easy to see how much load the home is consuming (red fill), how much the solar system is generating (green fill), and how it is either offsetting the load or feeding power back into the grid.

Below is a view of a 6 hour window of time. The data is extremely granular, collected and rendered once per second.

6hour

The view below is for a 24 hour period. As you can see from the figures, a sunny day in May produces over 35kWh.

day

The image below is a view over a one week period. You can certainly see the influence of cloudy days. As one changes the time period, the UI automatically calculates what you are saving (excluding State production incentives).

week

In case you are curious, my 6-node vSphere home lab is on 24×7 and consumes between 250 and 300 Watts (about 6.5kWh per day), so that accounts for some of the continuous line of red, even when there isn’t much going on in the house.

Economics of Solar
It is an understatement to say that the economics of residential solar vary widely. Geographic location, roof orientation, roof pitch, surface area, weather patterns, federal incentives, state incentives, and electricity rates all play a part in the equation of economic viability. Let’s not forget that, much like recycling or buying a hybrid vehicle, some people do it for emotional reasons as well. In other words, it might make them feel good, regardless of whether it is a silly financial decision or not. That is not what was driving me, but it would be naive to overlook that this influences people. Incentives typically fall into three categories.

  • Federal incentives. This is currently a 30% tax credit, claimed at the end of the year, on the up-front cost of the entire system.
  • State incentives. Some States include some form of production incentive program, meaning that for every kWh of energy produced (whether you use it or not), you may receive a payment. This can be at a pre-negotiated rate that is quite lucrative. Production incentives in the State of Washington can go as high as 54 cents per kWh, but may have limited terms. State incentives may also include waiving sales tax on equipment produced in the state.
  • Power provider incentives. This comes in the form of net metering: you effectively charge the power company for every kWh that you produce but do not use, often at a rate equal to what they charge you for power (e.g. 10 cents per kWh).

Realistically, the State and power provider incentives are heavily tied to each other, as power companies are heavily regulated State entities.
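To see how these incentives interact, here is a hedged back-of-the-envelope payback calculation. Every figure is an illustrative assumption; actual costs, production, and incentive rates vary widely by state and installer:

```python
# Back-of-the-envelope payback for a grid-tied system. Every figure is an
# illustrative assumption; incentives and rates vary widely by state.

system_cost     = 22000.0  # installed cost, dollars (assumed)
federal_credit  = 0.30     # 30% federal tax credit on the up-front cost
annual_kwh      = 6000.0   # estimated annual production (assumed)
production_rate = 0.54     # state production incentive, dollars/kWh
retail_rate     = 0.10     # avoided cost of grid power, dollars/kWh

net_cost = system_cost * (1 - federal_credit)
annual_return = annual_kwh * (production_rate + retail_rate)

print(f"Net cost after federal credit: ${net_cost:,.0f}")
print(f"Annual incentives + savings:   ${annual_return:,.0f}")
print(f"Simple payback: {net_cost / annual_return:.1f} years")
```

With a generous production incentive the simple payback lands in the single digits of years; remove the state incentive and the same system takes several times longer to break even, which is the point made below about economic viability.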

Usually it is State incentives, or high power rates in a State, that make solar economically viable. These incentives can give an investment like this a very reasonable break-even period. If there are no State incentives and you have dirt cheap power from the grid, then it becomes a much tougher sell. This is often where battery-backed systems with cheaper Chinese-manufactured panels come into play. It is a rapidly changing industry, and depends heavily on legislation in each State. Is solar right for you? It depends on many of the conditions stated above. It’s really best to check with a local installer who can help you determine if it is or not. I used Northwest Wind & Solar to help work through that process, as well as the installation of the system.

Observations in production
Now that things have been up and running for a while, there are a few noteworthy observations worth sharing:

  • The actual performance of solar varies widely. Diffused sunlight, or just daylight, will certainly generate power, but it may be only 10% to 20% of the panel’s potential. This is one of the reasons why the power generated can fluctuate so widely.
  • Solar requires a lot of surface area. This was no surprise to me because of my past experience buying small, deck-of-cards-sized panels from Radio Shack in my youth. Each of my 20 Itek panels measures 3’x5′ and produces 280W in theoretically ideal conditions. Depending on your average daily consumption of energy, you might need between 15 and 40 panels just to accommodate your energy utilization rate. Because of this need for a large surface area, incorporating solar into objects such as vehicles is gimmickry at best (yes, I’m talking to you, Toyota) and plays to emotions more than it provides any practical benefit.
  • Monitoring of your power is pretty powerful. Aside from the cool factor of the software that allows you to see how much energy is generated, you also quickly see the realities of some items in your household. Filling a house full of LEDs might reduce your energy consumption and make you feel good along the way, but a few extra loads of laundry in the dryer, or getting a bit trigger-happy with the A/C unit in your home, will quickly offset those savings.
  • Often a crystal clear sunny day does not yield the highest wattage of power generation. The highest peak output comes on partly sunny days; I suspect there is less atmospheric interference on those days. For me, a partly sunny day may peak the power generation of my system at 5.25kW, while what would be thought of as a crystal clear blue sky day will often top out at about 4.6kW.

Determining whether or not to invest in residential solar is really no different than making a smart design decision in the Data Center. Use data, not emotions, to drive the decision making, then follow that up with real data analysis to determine its success. This approach helps avoid the "trust us, it’s great!" mentality found all too often in the IT industry and beyond.

Viewing the impact of block sizes with PernixData Architect

In the post Understanding block sizes in a virtualized environment, I describe what block sizes are as they relate to storage I/O, and how they became one of the most overlooked metrics of virtualized environments. The first step in recognizing their importance is providing visibility to them across the entire Data Center, but with enough granularity to view individual workloads. However, visibility into a metric like block sizes isn’t enough. The data itself has little value if it cannot be interpreted correctly. The data must be:

  • Easy to access
  • Easy to interpret
  • Accurate
  • Easy to understand how it relates to other relevant metrics

Future posts will cover specific scenarios detailing how this information can be used to better understand and tune your environment for better application performance. Let’s first learn how PernixData Architect presents block size when looking at very basic, but common, read/write activity.

Block size frequencies at the Summary View
When looking at a particular VM, the "Overview" view in Architect can be used to show the frequency of I/O sizes across the spectrum of small blocks to large blocks for any time period you wish. This I/O frequency will show up on the main Summary page when viewing across the entire cluster. The image below focuses on just a single VM.

(Click on images for a full size view)

vmpete-summary

What is most interesting about the image above is that the block size frequencies for reads and writes are different. This is a common characteristic that has largely gone unnoticed because there has been no easy way to view the data in the first place.

Block sizes using a "Workload" View
The "Workload" view in Architect presents a distribution of block sizes, in percentage form, as they occur on a single workload, a group of workloads, or across an entire vSphere cluster. The time frame can be any period you wish. This view shows, more clearly and quickly than any other, how complex, unique, and dynamic the distribution of block sizes is for any given VM. The example below represents a single workload across a 15 minute period of time. Much like read/write ratios, or other metrics like CPU utilization, it’s important to understand these changes as they occur, and not just as a single summation or percentage over a long period of time.

vmpete-workloadview

Viewing the "Workload" view in any real world environment instantly provides new perspective on the limitations of synthetic I/O generators. Their general inability to emulate the very complex distribution of block sizes in your own environment limits their value for that purpose. The "Workload" view also shows how dramatic and continuous the changes in workloads can be. This speaks volumes as to why one-time storage assessments are not enough. Nobody treats their CPU or memory resources that way. Why would we limit ourselves that way for storage?

Keep in mind that the "Workload" view illustrates this distribution in percentage form. Views that are percentage based aim to illustrate proportion relative to a whole; they do not show the absolute values behind those percentages. The distribution could represent 50 IOPS, or 5,000 IOPS. However, this type of view can be incredibly effective in identifying subtle changes in a workload or across an environment, for short term analysis or long term trending.

Block sizes in an IOPS and Throughput view
First, let’s look at this VM’s IOPS using the default "Read/Write" breakdown. The image below shows a series of reads before a series of mostly writes. Looking back at the "Workload" view above, you can see how these I/Os were represented by block size distribution.

vmpete-IOPSrw

Staying on this IOPS view and selecting the predefined "Block Size" breakdown, we can see the absolute numbers occurring at each block size. Unlike the percentage-based "Workload" view above, the image below shows the actual number of I/Os issued at each block size.

vmpete-IOPSblocksize

But that doesn’t tell the whole story. A block size is an attribute of a single I/O, so in an IOPS view, 10 IOPS of 4K blocks looks the same as 10 IOPS of 256K blocks. In reality, the latter is 64 times the amount of data. The way to view this from a "payload amount transmitted" perspective is the Throughput view with the "Block Size" breakdown, as shown below.
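The arithmetic behind that comparison is simple enough to sketch, using the same figures as above:

```python
# Same IOPS count, very different payload: block size is what turns an
# I/O count into an amount of data moved per second.

def throughput_kb_per_sec(iops, block_size_kb):
    """Payload moved per second for a given I/O rate and block size."""
    return iops * block_size_kb

small = throughput_kb_per_sec(10, 4)    # 10 IOPS of 4K blocks
large = throughput_kb_per_sec(10, 256)  # 10 IOPS of 256K blocks

print(small, "KB/s")        # 40 KB/s
print(large, "KB/s")        # 2560 KB/s
print(large // small, "x")  # 64x the data for the same IOPS
```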

vmpete-TPblocksize

Viewing block size by its payload (Throughput) as shown above provides a much better representation of the dominance of the large block sizes, and the relatively small payload of the smaller block sizes.

Here is another way Architect can help you visualize this data. We can click on the "Performance Grid" view and change the view so that we have IOPS and Throughput but for specific block sizes. As the image below illustrates, the top row shows IOPS and Throughput for 4K to <8K blocks, while the bottom row shows IOPS and Throughput for blocks over 256K in size.

vmpete-performancegrid-IOPSTP

The image above shows that while the peak number of IOPS for block sizes in the 4K to <8K range was similar to the number of IOPS for block sizes of 256K and above, the larger blocks delivered an enormously greater payload.

Why does it matter?
Let’s let PernixData Architect tell us why all of this matters. We will look at the effective latency of the VM over that same time period. We can see from the image below that the effective latency of the VM definitely increased as it transitioned to predominantly writes. (Read/Write boxes unticked for clarity.)

vmpete-Latencyrw

Now, let’s look at the image below, which shows latency by block size using the "Block Size" breakdown.

vmpete-Latency-blocksize

There you see it. Latency was by and large a result of the larger block sizes. The flexibility of these views can take an otherwise innocent looking latency metric and tell you what was contributing most to that latency.

Now let’s take it a step further. With Architect, the "Block Size" breakdown is a predefined view that shows the block size characteristics of both reads and writes combined, whether you are looking at Latency, IOPS, or Throughput. However, you can use a custom breakdown to show block sizes specifically for reads or for writes, as shown in the image below.

vmpete-Latency-blocksize-custom

The "Custom" breakdown for the Latency view shown above had all of the reads and writes of individual block sizes enabled, but some of them were simply "unticked" for clarity. This view confirms that the majority of latency was the result of writes that were 64K and above. In this case, we can clearly demonstrate that the latency seen by the VM was the result of larger block sizes issued by writes. The impact, however, is not limited to the higher latency of those larger blocks, as large block latencies can impact the smaller block I/Os as well. Stay tuned for more information on that subject.

As shown in the image below, Architect also allows you to simply click on a single point and drill in for more insight. This can be done on a per VM basis, or across the entire cluster. Hovering over each vertical bar representing the various block sizes will tell you how many I/Os were issued at that time, and the corresponding latency.

vmpete-Latency-Drilldown

Flash to the rescue?
It’s pretty clear that block size can have a significant impact on the latency your applications see. Flash to the rescue, right? Well, not exactly. All of the examples above come from VMs running on Flash. Flash, and how it is implemented in a storage solution, is part of what makes this so interesting, and so impactful to the performance of your VMs. We also know that the storage media is just one component of your storage infrastructure. These components, and their ability to hinder performance, exist regardless of whether one is using a traditional three-tier architecture or a distributed storage architecture like Hyper Converged environments.

Block sizes in the Performance Matrix
One unique view in Architect is the Performance Matrix: unique in both what it presents and how it can be used. Your storage solution might have been optimized by the manufacturer based on certain assumptions that may not align with your workloads, and typically there is no way of knowing that. As shown below, Architect can help you understand the workload characteristics under which the array begins to suffer.

vmpete-PeformanceMatrix

The Performance Matrix can be viewed on a per VM basis (as shown above) or in aggregate form. It’s a great view for seeing at what block size thresholds your storage infrastructure may be suffering, as the VMs see it. This is very different from statistics provided by an array, as Architect offers a complete, end-to-end understanding of these metrics with extraordinary granularity. Arrays are not in the correct place to accurately understand, or measure, this type of data.

Summary
Block sizes have a profound impact on the performance of your VMs, and are a metric that should be treated as a first class citizen, just like compute and other storage metrics. The stakes are far too high to leave this up to speculation, or to words from a vendor that say little more than "Our solution is fast. Trust us." Architect leverages its visibility into block sizes in ways that have never been possible. It takes advantage of this visibility to help you translate what it is into what it means for your environment.

Understanding block sizes in a virtualized environment

Cracking the mysteries of the Data Center is a bit like space exploration. You think you understand what everything is, and how it all works together, but struggle to understand where fact and speculation intersect. The topic of block sizes, as they relate to storage infrastructures, is one such mystery. The term is familiar to some, but elusive enough that many remain uncertain as to what it is, or why it matters.

This inconspicuous but all too important characteristic of storage I/O has often been misunderstood (if not completely overlooked) by well-intentioned Administrators attempting to design, optimize, or troubleshoot storage performance. Much like Working Set Sizes, block sizes are rarely of great concern to an Administrator or Architect, simply because of this lack of visibility and understanding. Sadly, myth turns into conventional wisdom, not only about what is typical in an environment, but about how applications and storage systems behave, and how to design, optimize, and troubleshoot for such conditions.

Let’s step through this process to better understand what a block is, and why it is so important to understand its impact on the Data Center.

What is it?
Without diving deeper than necessary, a block is simply a chunk of data. In the context of storage I/O, it is a unit in a data stream; a read or a write from a single I/O operation. Block size refers to the payload size of that single unit. We can blame some of the confusion about what a block is on overlapping industry nomenclature. Commonly used terms like block sizes, cluster sizes, pages, latency, etc. may be used in disparate conversations, but what is being referred to, how it is measured, and by whom can vary. Within the context of discussing file systems, storage media characteristics, hypervisors, or Operating Systems, these terms are used interchangeably, but do not have universal meaning.

Most who are responsible for Data Center design and operation know the term as an asterisk on the performance specification sheet of a storage system, or a configuration setting in a synthetic I/O generator. Performance specifications on a storage system are often the result of a synthetic test using the most favorable block size (often 4K or smaller) to maximize the number of IOPS the array can service. Synthetic I/O generators typically allow one to set this, but users often have no idea what the distribution of block sizes is across their workloads, or if it is even possible to simulate it with synthetic I/O. The reality is that most applications draw a unique mix of block sizes at any given time, depending on the activity.

I first wrote about the impact of block sizes back in 2013 when introducing FVP into my production environment at the time. (See the section "The IOPS, Throughput & Latency relationship.") FVP provided a glimpse of the impact of block sizes in my environment. Countless hours with the performance graphs, and using vscsiStats, provided new insight about those workloads and the environment in which they ran. However, neither tool was built for real time analysis or long term trending of block sizes for a single VM, or across the Data Center. I had always wished for an easier way.

Why does it matter?
The best way to think of block size is as the amount of storage payload carried in a single unit. The physics becomes obvious when you think about the size of a 4KB payload versus a 256KB payload, or even a 512KB payload. Since we refer to them as blocks, let’s use a square to represent their relative capacities.

image

Throughput is the product of IOPS and the block size of each I/O being sent or received. It’s not just the fact that a 256KB block carries 64 times the amount of data of a 4K block; it is the additional effort it takes throughout the storage stack to handle it, whether that be bandwidth on the fabric, the protocol, or processing overhead on the HBAs, switches, and storage controllers. And let’s not forget the burden it places on the persistent media.
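Because throughput is IOPS multiplied by block size, a handful of large I/Os can dominate a workload’s payload even when small I/Os dominate the I/O count. The distribution below is an invented example, not data from a real VM:

```python
# Aggregate throughput of a mixed-block-size workload: a small share of
# the IOPS can carry most of the payload. The mix below is an assumption.
workload = {
    # block size (KB): IOPS observed at that size
    4:   500,
    64:  100,
    256: 50,
}

total_iops = sum(workload.values())
total_kb_s = sum(size * iops for size, iops in workload.items())

for size, iops in workload.items():
    share = size * iops / total_kb_s
    print(f"{size:>3}K blocks: {iops:>4} IOPS, {share:5.1%} of throughput")

print(f"Total: {total_iops} IOPS, {total_kb_s} KB/s")
```

In this invented mix, the 256K blocks account for under 8% of the IOPS but roughly 60% of the data moved, which is exactly the effect the Throughput views above make visible.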

This variability in performance is more prominent with Flash than with traditional spinning disk. Reads are relatively easy for Flash, but the methods used for writing to NAND Flash prevent writes from achieving the same performance as reads, especially writes using large blocks. (For more detail on the basic anatomy and behavior of Flash, take a look at Frank Denneman’s post on Flash wear leveling, garbage collection, and write amplification. Here is another primer on the basics of Flash.) Even a very small number of writes using large blocks can trigger all sorts of activity on the Flash devices that prevents effective performance from matching that of smaller block I/O. This volatility in performance is a surprise to just about everyone when they first see it.

Block size can impact storage performance regardless of the type of storage architecture used. Whether it is a traditional SAN infrastructure, or a distributed storage solution used in a Hyper Converged environment, the factors and the challenges remain. Storage systems may be optimized for a block size that does not necessarily align with your workloads. This could be the result of design assumptions of the storage system, or limits of its architecture. The ability of storage solutions to cope with certain workload patterns varies greatly as well. The difference between a good storage system and a poor one often comes down to its ability to handle large block I/O. Insight into this information should be a part of the design and operation of any environment.

The applications that generate them
What makes the topic of block sizes so interesting are the Operating Systems, the applications, and the workloads that generate them. Block sizes are often dictated by the processes of the OS and of the applications running in it.

Contrary to what many might think, there is often a wide mix of block sizes in use at any given time on a single VM, and it can change dramatically by the second. These changes have a profound impact on the ability of the VM, and the infrastructure it lives on, to deliver the I/O in a timely manner. It’s not enough to know that perhaps 30% of the blocks are 64KB in size. One must understand how they are distributed over time, and how the latencies or other attributes of those blocks of various sizes relate to each other. Stay tuned for future posts that dive deeper into this topic.

Traditional methods capable of visibility
The traditional methods for viewing block sizes have been limited. They provide an incomplete picture of their impact – whether it be across the Data Center, or against a single workload.

1. Kernel statistics courtesy of vscsiStats. This utility is a part of ESXi, and can be executed via the command line of an ESXi host. The utility provides a summary of block sizes for a given period of time, but suffers from a few significant problems.

  • Not ideal for anything but a very short snippet of time, against a specific VMDK.
  • Cannot present data in real time; it is essentially a post-processing tool.
  • Not intended to show data over time. vscsiStats will show a sum total of I/O metrics for a given period, but it is a single sample period, with no way to track trends. One must script it to create results for more than a single period of time.
  • No context. It treats the workload (actually, just the VMDK) in isolation, missing the context necessary to properly interpret the data.
  • No way to visually understand the data. This requires the use of other tools to help visualize the data.
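That scripting burden is worth illustrating. The sketch below mimics the kind of bucketed histogram vscsiStats produces for one sample window; the bucket boundaries here are illustrative, not the utility’s exact bins, and trending still requires collecting repeated samples yourself:

```python
# Post-processing sketch: a vscsiStats-style histogram summarizes one
# sample window of I/O sizes into bucket counts. Bucket boundaries are
# illustrative assumptions, not the utility's exact bins.
BUCKETS_KB = [4, 8, 16, 32, 64, 128, 256, 512]

def histogram(io_sizes_kb):
    """Summarize one sample window of I/O sizes into bucket counts."""
    counts = {b: 0 for b in BUCKETS_KB}
    counts["larger"] = 0
    for size in io_sizes_kb:
        for bound in BUCKETS_KB:
            if size <= bound:
                counts[bound] += 1
                break
        else:
            counts["larger"] += 1
    return counts

# One simulated sample window of I/O sizes (in KB)
sample = [4, 4, 4, 64, 64, 256, 1024]
print(histogram(sample))
```

Even with a wrapper like this, each run is still a single isolated summary, which is the core limitation described in the list above.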

The result, especially at scale, is a very labor intensive exercise that is an incomplete solution. It is extremely rare that an Administrator runs through this exercise on even a single VM to understand their I/O characteristics.

2. Storage array. This would be a vendor specific "value add" feature that might present some simplified summary of block size data, but this too is an incomplete solution:

  • Not VM aware.  Since most intelligence is lost the moment storage I/O leaves a host HBA, a storage array would have no idea what block sizes were associated with a VM, or what order they were delivered in.
  • Measuring at the wrong place. The array is simply the wrong place to measure the impact of block sizes. Think about all of the queues storage traffic must pass through before writes are committed and reads are fetched (this also assumes no caching tiers exist outside of the storage system). The desire would be to measure at a location that takes all of this into consideration: the hypervisor. Incidentally, this is why an array can show great performance in its own statistics while the observed latency of the VM suffers. This speaks to the importance of measuring data at the correct location.
  • Unknown and possibly inconsistent method of measurement.  Showing any block size information is not a storage array’s primary mission, and doesn’t necessarily provide the same method of measurement as where the I/O originates (the VM, and the host it lives on). Therefore, how it is measured, and how often it is measured is generally of low importance, and not disclosed.
  • Dependent on the storage array.  If different types of storage are used in an environment, this doesn’t provide adequate coverage for all of the workloads.

The Hypervisor is an ideal control plane to analyze the data. It focuses on the results of the VMs without being dependent on nuances of in-guest metrics or a feature of a storage solution. It is inherently the ideal position in the Data Center for proper, holistic understanding of your environment.

Eyes wide shut – Storage design mistakes from the start
The flaw with many design exercises is that we assume we know what our assumptions are. Let’s consider the typical inputs when it comes to storage design. These include factors such as:

  • Peak IOPS and Throughput.
  • Read/Write ratios
  • RAID penalties
  • Perhaps some physical latencies of components, if we wanted to get fancy.

Most who have designed or managed environments have gone through some variation of this exercise, followed by a little math to come up with the correct blend of disks, RAID levels, and fabric to support the desired performance. Known figures are used when they are available, and the others are filled in with assumptions. And yet, block sizes, and everything they impact, are nowhere to be found. Why? Lack of visibility, and lack of understanding.

If we know that block sizes can dramatically impact the performance of a storage system (as will be shown in future posts) shouldn’t it be a part of any design, optimization, or troubleshooting exercise?  Of course it should.  Just as with working set sizes, lack of visibility doesn’t excuse lack of consideration.  An infrastructure only exists because of the need to run services and applications on it. Let those applications and workloads help tell you what type of storage fits your environment best. Not the other way around.

Is there a better way?
The ideal approach for measuring the impact of block sizes will always include measuring from the location of the hypervisor, as this will provide these measurements in the right way, and from the right location.  vscsiStats and vCenter related metrics are an incredible resource to tap into, and will provide the best understanding of the impact of block sizes in a storage system.  There may be some time investment to decipher the block size characteristics of a workload, but the payoff is generally worth the effort.

My vSphere Home Lab. 2016 edition

Here we go again. I had no intention of writing a follow-up to my "Home Lab 2015 edition" post last year, as I didn’t foresee any changes to the lab in the coming year that would be interesting enough to write about. 

So much for predicting the future.

Sometimes Home Lab environments tend to border on vanity projects. I would like to think the recent changes in my lab were done out of need, but rationalizing wants into needs is common enough to be considered a national pastime. Nevertheless, my profession now has me testing workloads and new technologies on a daily basis, and this was a driving force behind these upgrades. Honest.

Demand often drives change. This is where the evolution of my Home Lab continues to mimic a production environment – just at a smaller scale. Budget, performance, capacity, space, and heat are all elements of a Home Lab design that are almost laughably similar to a production environment. Workloads evolve, and needs grow – quickly rendering previously used design inputs inadequate. That is exactly what happened to me, and I knew I had to invest in a few upgrades.

Compute – Performance/Testing Cluster
It was finally time to replace a few of the oldest components of the lab. My primary hosts, built on Intel Sandy Bridge processors, used motherboards limited to just 32GB of RAM and PCIe 2.0. I didn’t have any 10Gb connectivity other than my old InfiniBand gear, and I was consistently pushing the CPUs to their limit.

I decided to go with a pair of Supermicro 5018D-FN4T rack mounted units. These are an incredibly small 1U form factor that feature built-in dual 10GbE and dual 1GbE interfaces, a dedicated IPMI port, a PCIe 3.0 slot, 4 drive bays, and can pack in up to 128GB of DDR4 memory. The motherboard uses the soldered-on, 8-core Xeon D-1540 chip, and the power supply is built into the chassis. Both items reduce flexibility, but improve the no-brainer simplicity of the unit. What is most surprising when you get your hands on them is that they are incredibly small, yet still half empty when the case is cracked open. A third host will probably be in the works at some point, but it’s not necessary at this time.

image

It probably will come as no surprise that multiple PernixData FVP based acceleration tiers are an integral component of my infrastructure, so a few changes occurred in that realm.

1. Adding NVMe cards to use as a Flash based acceleration tier for FVP. For this lab arrangement, I used the Intel 750 NVMe based PCIe 3.0 card. While they are not officially on the VMware HCL, they are fine for the Home Lab, as they borrow heavily from the Intel DC P3xxxx line of NVMe cards that are on the VMware HCL. Intel NVMe cards are outstanding performers, and completely bypass the legacy elements of the traditional storage stack on a host, such as storage controllers and SCSI commands. An NVMe based Flash device is still limited by the physics of NAND Flash, but it is an incredible performer that can make any SSD based Flash drive look quite feeble in comparison. Just make sure to use Intel’s driver for vSphere.

2. More RAM to use as a DFTM acceleration tier in FVP. I added 64GB of Micron memory, which allows me to allocate a nice chunk of RAM for FVP acceleration. The beauty of using memory as an acceleration tier is that it avoids all of the characteristics of NAND Flash, and it can leverage compression techniques. This typically increases the effective tier size between 30% and 70%, depending on workload. The larger the tier size, the more content can live in the tier, and the less eviction occurs against the working set of data.
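As a rough illustration of that 30–70% figure, here is what compression does to an allocation (the 32GB allocation below is hypothetical, and actual uplift is workload dependent):

```python
# Effective acceleration tier size when in-memory compression is applied.
# The 30-70% uplift range comes from the text above; results vary by workload.
def effective_tier_gb(allocated_gb: float, uplift: float) -> float:
    return allocated_gb * (1 + uplift)

for uplift in (0.30, 0.50, 0.70):
    print(f"32 GB allocated -> {effective_tier_gb(32, uplift):.1f} GB effective at {uplift:.0%} uplift")
```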

Compute – Management Cluster
A management cluster in a Home Lab is great. It has allowed me to really experiment with testing workloads and new technologies without any impact to the components that run the infrastructure. My Management Cluster now comprises three Intel NUCs. I would have been perfectly happy with just a couple of NUCs as a Management cluster, but unfortunately the 16GB RAM limitation makes that a bit tough. Eventually, the NUCs will outlive their usefulness in the lab, but the great part about them is that they can easily be used as a desktop workstation, or media server. For now, they will continue to serve their purpose as a Management Cluster.

Switching
Upgrading my network meant adding 10GbE connectivity. For this, I chose a Netgear XS708E, 8 port, 10GbE switch. This would serve as a fast interconnect for east-west traffic between hosts. My adventures with InfiniBand were always interesting and educational. It’s an amazing technology, but there was just too much administrative overhead to the gear I was using. Unfortunately, there are not too many small, affordable 10GbE switches out there. The Dell 12 port X4012 10GbE switch looked really appealing based on the specs, but the ports are SFP+, so that would have meant rethinking a number of things. As for the Netgear, what do I think of it?  After configuring the product, I’m convinced the folks at Netgear wanted to punish anyone who buys the unit. All of the configuration items that should be so basic in a CLI or web based UI are obfuscated in a proprietary interface that seems to be missing half of the options you’d expect. Dear Netgear, please let me configure LAGs, trunks, MTU size, and VLANs with something remotely resembling common sense. It does work, but if I could do it over, I’d choose something else.

My network core still consists of a Cisco SG300-20 Layer 3 switch. Moving away from hosts that had 6, 1GbE ports down to hosts that had just two 1GbE ports and two 10GbE ports meant that I was able to free up space on this switch. That switch still has a bit of a premium price for a 20 port L3 switch, but it has been a rock solid component of my lab for over 4 years now.

Ancillary Components
One thing I was tired of dealing with was my wireless gateway. I’ve grown sour on the consumer WiFi/Router solutions available. Most aren’t stable, and lack features unless you crack them open with a DD-WRT build. Memory leaks and other reboot inducing behaviors are not what you want to deal with when attempting to access the lab remotely, so it was time to take a new approach. I went with the following for my gateway and wireless needs.

Motorola SB6121 DOCSIS 3.0 Cable Modem. This was purchased to replace the oversized cable modem provided by the service provider. It’s small, affordable, and prevents the cable company from changing settings on me, as they often would with their own unit.

Ubiquiti EdgeRouter PoE. This 5 port unit serves as my gateway, where one leg feeds downstream to my core switch, and another leg is used as a DMZ for my WiFi. This is a great unit that offers everything I was looking for: trunking, static routes, NAT, and firewalling. The multiple PoE ports make it easy to add new wireless access points.

Ubiquiti UniFi AP Wireless Access Point. These access points pair nicely with the PoE based router above.

It’s been a rock solid, winning combination. Always on, with no random need to reboot. Total control over configuration, and no silliness from the cable provider. Mission accomplished.

Storage
This was one of the few components that didn’t change. Storage is served up by two 5-bay Synology units with a mix of SSDs and spinning disk. I had plenty of capacity, with enough options to test various media if needed.

Mounting
Until this latest refresh, a $25 utility rack had housed the assortment of oddly shaped lab gear pretty well. With the changeover to small 1U rackmount servers and additional switchgear, it was time for an official enclosure. I went with a Tripp Lite 9U Wall Mount Cabinet. It will eventually be wall mounted, but for the time being, sits perfectly on a $12 moving dolly from Harbor Freight. The cabinet has some nice mounting ports for supplementary exhaust fans should the need arise.

Relocation
Within the first few minutes of powering up the new hosts, I realized the arrangement was going to need a new home. Server room loud?  No. But moving from 38dB to 50+dB is loud enough that you wouldn’t want to be working next to it all day. There is no way 1U fans spinning at 8,000 RPM will ever be soothing. I had been quite proud of how quiet my lab gear had been up until this point. I stayed away from anything 1U, and went with quiet fans wherever I could. I tried desperately to suppress the noise, replacing all of the fans with ultra-quiet Noctua fans. Unfortunately, ultra-quiet can also mean they don’t move much air, and it’s not good to disregard the delta in CFM between fans. The heat alarms made it very clear this wasn’t going to work, and I didn’t want to burn up perfectly good gear. I placed all of the factory fans back in the 1U servers and the 10GbE switch, and used the Noctua fans as supplementary fans in each device. They do help the primary fans spin at a lower rate, so the effort wasn’t a total waste. The 9U cabinet will eventually be relocated to a more permanent location, but for the time being, it’s making a coat closet nice and warm.

What it looks like
The entire lab, including the UPS, is now self-contained, which should make its final relocation straightforward. The entire arrangement (5 hosts, 2 switches, 2 Synology NAS units, etc.) draws between 250 and 300 watts depending upon the load. Considering the old, much less capable arrangement ran at about 200 watts, I was pretty happy with the result.
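For the curious, the running cost of that draw is easy to estimate (the electricity rate below is a placeholder; substitute your utility’s figure):

```python
# Monthly energy use and cost for a lab that runs 24/7.
def monthly_kwh(watts: float, hours: float = 24 * 30) -> float:
    return watts * hours / 1000

RATE_PER_KWH = 0.10  # hypothetical rate in $/kWh
for watts in (250, 300):
    kwh = monthly_kwh(watts)
    print(f"{watts} W continuous -> {kwh:.0f} kWh/month, ~${kwh * RATE_PER_KWH:.2f}")
```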

image

In the spirit of full disclosure, the cabinet door does cover up some rather careless cable management practices. Regardless, I am thrilled with the end result and how it performs. A space efficient arrangement that is extremely powerful.

No matter how little, or how much you decide to invest in a Home Lab, I’ve learned that the satisfaction seems to be directly proportional to how much value it brings to you. Whether it be a hobby, used for professional growth, or a part of your day-to-day job duties, any sense of buyer’s remorse only seems to creep in when it’s not used. For my circumstances, that doesn’t seem to be a problem.

Working set sizes in the Data Center

There is no shortage of mysteries in the data center. These stealthy influencers can undermine performance and consistency of your environment, while remaining elusive to identify, quantify, and control. Virtualization helped expose some of this information, as it provided an ideal control plane for visibility. But it does not, and cannot properly expose all data necessary to account for these influencers. The hypervisor also has a habit of presenting the data in ways that can be misinterpreted.

One such mystery as it relates to modern day virtualized data centers is known as the "working set." This term certainly has historical meaning in the realm of computer science, but the practical definition has evolved to include other components of the Data Center; storage in particular. Many find it hard to define, let alone understand how it impacts their data center, and how to even begin measuring it.

We often focus on what we know, and what we can control. However, lack of visibility into an influencing factor in the data center does not make it unimportant. Unfortunately, this is how working sets are usually treated. They are often not a part of a data center design exercise because they are completely unknown, and they are rarely written about for the very same reason. That is ironic, considering that every modern architecture deals with some concept of localization of data in order to improve performance: cached content versus its persistent home. How much of it is there? How often is it accessed? These types of questions are critically important to answer.

What is it?
For all practical purposes, a working set refers to the amount of data that a process or workflow uses in a given time period. Think of it as the hot, commonly accessed subset of your overall persistent storage capacity. But that simple explanation leaves a handful of terms that are difficult to qualify and quantify. What counts as recent? Does "amount" mean reads, writes, or both? And does it distinguish between the same data written over and over again, and new data? Let’s explore this more.

There are several traits of working sets that are worth reviewing.

  • Working sets are driven by the workload, the applications driving the workload, and the VMs that they run on.  Whether the persistent storage is local, shared, or distributed, it really doesn’t matter from the perspective of how the VMs see it.  The size will be largely the same.
  • Working sets always relate to a time period.  However, it’s a continuum.  And there will be cycles in the data activity over time.
  • A working set will comprise reads and writes.  The amount of each is important to know because reads and writes have different characteristics, and demand different things from your storage system.
  • Working set size refers to an amount, or capacity, but what and how many I/Os it took to make up that capacity will vary due to ever changing block sizes.
  • Data access type may be different.  Is one block read a thousand times, or are a thousand blocks read one time?  Are the writes mostly overwriting existing data, or is it new data?  This is part of what makes workloads so unique.
  • Working set sizes evolve and change as your workloads and data center change.  Like everything else, they are not static.

A simplified, visual interpretation of data activity that would define a working set might look like the image below.

image

If a working set is always related to a period of time, then how can we ever define it? Well in fact, you can. A workload often has a period of activity followed by a period of rest. This is sometimes referred to as the "duty cycle." A duty cycle might be the pattern that shows up after a day of activity on a mailbox server, an hour of batch processing on a SQL server, or 30 minutes compiling code. Taking a look over a larger period of time, the duty cycles of a VM might look something like below.

image

Working sets can be defined at whatever time increment desired, but the goal in calculating a working set will be to capture at minimum, one or more duty cycles of each individual workload.
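To make the idea concrete, the working set over a window can be sketched as the set of unique blocks touched within that window. This is an illustrative toy (fixed block size, synthetic trace), not how any particular product measures it:

```python
from collections import defaultdict

def working_sets_mb(trace, window_s, block_kb=4):
    """trace: iterable of (timestamp_seconds, block_number) accesses.
    Returns {window_index: working set size in MB}, counting each
    unique block once per window no matter how often it is hit."""
    unique = defaultdict(set)
    for ts, block in trace:
        unique[int(ts // window_s)].add(block)
    return {w: len(blocks) * block_kb / 1024 for w, blocks in sorted(unique.items())}

# One block hit 1,000 times contributes no more than a block hit once:
trace = [(t, 7) for t in range(1000)] + [(0.5, b) for b in range(256)]
print(working_sets_mb(trace, window_s=3600))  # {0: 1.0} - one MB of unique blocks
```

Picking `window_s` to span at least one full duty cycle, as described above, is what keeps the estimate representative.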

Why it matters
Determining working set sizes helps you understand the behaviors of your workloads in order to better design, operate, and optimize your environment. For the same reason you pay attention to compute and memory demands, it is also important to understand storage characteristics, which include working sets. Understanding and accurately calculating working sets can have a profound effect on the consistency of a data center. Have you ever heard of a real workload performing poorly, or inconsistently, on a tiered storage array, hybrid array, or hyper-converged environment? These architectures are extremely sensitive to right-sizing of the caching layer, and failing to accurately account for the working set sizes of production workloads is a common reason for such issues.

Classic methods for calculation
Over the years, this mystery around working set sizes has resulted in all sorts of sad attempts at calculating them. Those attempts have included:

  • Calculate using known (but not very helpful) factors.  These generally involve looking at some measurement of IOPS over the course of a given time period, perhaps dressed up with a few other factors to make it look neat.  This is terribly flawed, as it assumes one knows all of the various block sizes for that given workload, and that block sizes for a workload are consistent over time.  It also assumes all reads and writes use the same block size, which is also false.
  • Measure working sets defined on a storage array, as a feature of the array’s caching layer.  This attempt often fails because it sits at the wrong location.  It may know what blocks of data are commonly accessed, but there is no context to the VM or workload imparting the demand.  Most of that intelligence about the data is lost the moment the data exits the HBA of the vSphere host.  Lack of VM awareness can even make an accurately guessed cache size on an array be insufficient at times due to cache pollution from noisy neighbor VMs.
  • Take an incremental backup, and look at the amount of changed data.  This sounds logical, but this can be misleading because it will not account for data that is written over and over, nor does it account for reads.  The incremental time period of the backup may also not be representative of the duty cycle of the workload.
  • Guess work.  You might see "recommendations" that say a certain percentage of your total storage capacity used is hot data, but this is a more formal way to admit that it’s nearly impossible to determine.  Guess large enough, and the impact of being wrong will be less, but this introduces a number of technical and financial implications on data center design. 
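The first of those flawed methods is easy to demonstrate: the answer swings wildly with the block size you assume. The IOPS figure and window below are invented purely for illustration:

```python
# Naive working set estimate: average IOPS * assumed block size * window.
# It implicitly assumes one fixed block size and that every I/O touches
# unique data - both false in practice, as described above.
def naive_working_set_gb(avg_iops: float, assumed_block_kb: float, window_s: float) -> float:
    return avg_iops * assumed_block_kb * window_s / 1024**2

# Same workload, same one-hour window, two different block size assumptions:
print(naive_working_set_gb(2000, 4, 3600))   # ~27.5 GB assuming 4K blocks
print(naive_working_set_gb(2000, 64, 3600))  # ~439.5 GB assuming 64K blocks
```

A 16x spread from a single unverifiable assumption is why this method fails.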

Since working sets are collected against activity that occurs on a continuum, calculating a typical working set with a high level of precision is not only impossible, but largely unnecessary.  When attempting to determine working set size of a workload, the goal is to come to a number that reflects the most typical behavior of a single workload, group of workloads, or a total sum of workloads across a cluster or data center.

A future post will detail approaches that should give a sufficient level of understanding on active working set sizes, and help reduce the potential of negative impacts on data center operation due to poor guesswork.

Thanks for reading


A closer look at the new UI for PernixData FVP, and beyond

In many ways, making a good User Interface (UI) seems like a simple task.  As evidenced by so many software makers over the years, it is anything but simple. A good UI looks elegant to the eye, and will become a part of muscle memory without you even realizing it. A bad UI can feel like a cruel joke; designed to tease the brain, and frustrate the user. It’s never done intentionally, of course. In fact, bad visual and functional designs happen in every industry all the time. Just think of your favorite ugly car. At some point there was an entire committee that gave it a thumbs up. User Experience (UX) design is also an imperfect science, and the impressions are subject to the eyes of the beholder.

A good UI should present function effortlessly. Make the complex simple. However, it is more than just buttons and menus that factor into a user experience. That larger encompassing UX design is what incorporates among other things, functional requirements with a visual interface that is productive and intuitive. PernixData FVP has always received high marks for that user experience. The product not only accelerated storage I/O, but presented itself in such a way that made it informative and desirable to use.

Why the change?
PernixData products (FVP, and the up and coming Architect) now have a standalone, HTML5 interface using your favorite browser. Moving away from the vSphere Web client was a deliberate move that at first impression might be a bit surprising. With changes in needs and expectations comes the challenge of understanding what is the best way to achieve a desired result. Standalone, traditionally compiled clients are not as appealing as they once were for numerous reasons, so adopting a modern web based framework was important.

Moving to a standalone, pure HTML5 UI built from the ground up allowed for these interactions to be built just the way they should be. It removes limits explicitly or implicitly imposed by someone else’s standards. PernixData gets to step away from the shadows of VMware’s current implementation of FLEX. Removing limitations allows for more flexibility now, and in the future.

UI characteristics
One of the first impressions you will get is the performance of the UI. It is quick and snappy. UX pain often begins with performance – whether it is the technical speed, or the ability for a user to find what they want quickly. The new UI continues where the older UI left off; telling more with less, and doing so very quickly.

Looking at the image below, you will also see that the UI was designed for use with multiple products. The framework is used not only for FVP, but for the upcoming release of PernixData Architect. This allows transitions between products to be fluid, and intuitive.

vmpete-Hub

New search capabilities
In larger environments, isolating and filtering VMs for deeper review is a valuable feature. Not that big of a deal with a few dozen VMs, but get a few hundred or more VMs, and it becomes difficult to keep track. The quick search abilities allow for real time filtering down of VMs based on search criteria. Highlighting those VMs then allows for easy comparison.

vmpete-quicksearch

More granularity with the hero numbers
Hero numbers have been a great way to see how much offload has occurred in an infrastructure. How many I/Os offloaded from the Datastore, how much bandwidth never touched your storage infrastructure due to this offload, and how many writes were accelerated. In previous versions, that number started counting from the moment the FVP cluster was created. In FVP 3.0, you get to choose to see how much offload has occurred over a more granular period of time.

vmpete-heronumbers

New graphs to show cache handling
Previously, "Hit Rate" and "Eviction Rate" helped express cache usage, and were combined in a single graph. Hit Rate indicated the percentage of reads that were serviced by the acceleration tier; it didn’t measure writes in any way. Eviction Rate indicated the percentage of data being evicted from the acceleration tier to make room for new incoming hot data. Each of them now has its own graph that is more expansive in the information it provides.

As shown below, "Acceleration Rate" is in place of "Hit Rate."  This new metric now accounts for both reads and writes. One thing to note is that writes will only show as "accelerated" here when in Write Back mode. Even though Write Back and Write Through populate the cache with the same approach, the green "write" line will only indicate acceleration when the VM or VMs are using a Write Back policy.

vmpete-accelerationrate

"Population and Eviction" (as shown below) replaces the latter half of the "Hit Rate and Eviction Rate" metric. Note that Eviction Rate is no longer measured as a percentage, but by an actual amount in GB. This is a better way to view it, as the sizes of acceleration tiers vary, so the same percentage meant very different amounts in different environments. Now you can tell more accurately how much data is being evicted at any given time. Population rate is exactly as it sounds. It accounts for write data being placed into the cache regardless of its Write Policy (Write Back or Write Through), as well as data read for the first time from the backing storage and placed into the cache (known as a "false write"). This graph provides much more detail about how the cache is being utilized in your environment.

vmpete-populationeviction
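The reasoning behind reporting eviction in GB rather than as a percentage can be shown with two hypothetical tier sizes:

```python
# The same eviction percentage means very different amounts of data
# depending on tier size (both tier sizes here are hypothetical).
def evicted_gb(tier_gb: float, eviction_pct: float) -> float:
    return tier_gb * eviction_pct / 100

for tier_gb in (200, 2000):
    print(f"5% eviction on a {tier_gb} GB tier = {evicted_gb(tier_gb, 5):.0f} GB")
```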

Now, if you really want to see some magical charts, and the insights that can be gleaned from them, go take a look at PernixData Architect.  I’ll be covering those graphs in more detail in upcoming posts.

Summary
A lot of new goodies have been packed into the latest version of FVP, but this covers a bit about why the UI was changed, and how PernixData products are in a great position to evolve and meet the demands of the user and the environment.

Related Links

Inside PernixData Engineering – User Interaction Design

Inside PernixData Engineering – UI and Web Technologies

Understanding PernixData FVP’s clustered read caching functionality

When PernixData debuted FVP back in August 2013, for me there was one innovation in particular that stood out above the rest.  The ability to accelerate writes (known as “Write Back” caching) on the server side, and do so in a fault tolerant way.  Leverage fast media on the server side to drive microsecond write latencies to a VM while enjoying all of the benefits of VMware clustering (vMotion, HA, DRS, etc.).  Give the VM the advantage of physics by presenting a local acknowledgement of the write, but maintain all of the benefits of keeping your compute and storage layers separate.

But sometimes overlooked with this innovation is the effectiveness that comes with how FVP clusters acceleration devices to create a pool of resources for read caching (known as “Write Through” caching with FVP). For new and existing FVP users, it is good to get familiar with the basics of how to interpret the effectiveness of clustered read caching, and how to look for opportunities to improve the results of it in an environment. For those who will be trying out the upcoming FVP Freedom edition, this will also serve as an additional primer for interpreting the metrics. Announced at Virtualization Field Day 5, the Freedom Edition is a free edition of FVP with a few limitations, such as read caching only, and a maximum of 128GB tier size using RAM.

The power of read caching done the right way
Read caching alone is sometimes perceived as a helpful but temporary way to improve performance, addressing only one side of the I/O dialogue. That assertion tells an incomplete story. Read caching is often criticized, but let’s remember that caching in some form is used by almost everyone, and everything: storage arrays of all types, Hyper Converged solutions, and even DAS.  Dig a little deeper, and you realize its perceived shortcomings are most often attributed to how it has been implemented. By that I mean:

  • Limited, non-adjustable cache sizes in arrays or Hyper Converged environments.
  • Limited to a single host in server side solutions.  (operations like vMotion undermining its effectiveness)
  • Not VM or workload aware.

Existing solutions address some of these shortcomings, but fall short of addressing all three in order to deliver read caching in a truly effective way. FVP’s architecture addresses all three, giving you the agility to quickly adjust the performance tier while letting your centralized storage do what it does best; store data.

Since FVP allows you to choose the size of the acceleration tier, this impact alone can be profound. For instance, current NVMe based Flash cards are 2TB in size, and are expected to grow dramatically in the near future. Imagine a 10 node cluster with perhaps 20-40TB of acceleration tier serving up just 50TB of persistent storage. Compare this to a hybrid array that may put only a few hundred GB of flash devices in an array serving up that same 50TB, funneled through a pair of array controllers – flash that the I/Os would still have to traverse the network and storage stack to reach, and cached data that is arbitrarily evicted for new incoming hot blocks.

Unlike other host side caching solutions, FVP treats the collection of acceleration devices on each host as a pool. As workloads are actively moved across hosts in the vSphere cluster, those workloads will still be able to fetch the cached content from that pool using a lightweight protocol. Traditionally, host based caching would have to re-warm the data from the backend storage, using the entire storage stack and traditional protocols, if something like a vMotion event occurred.

FVP is also VM aware. This means it understands the identity of each cached block – where it is coming from, and going to -  and has many ways to maintain cache coherency (See Frank Denneman’s post Solving Cache Pollution). Traditional approaches to providing a caching tier meant that they were largely unaware of who the blocks of data were associated with. Intelligence was typically lost the moment the block exits the HBA on the host. This sets up one of the most common but often overlooked scenarios in a real environment. One or more noisy neighbor VMs can easily pollute, and force eviction of hot blocks in the cache used by other VMs. The arbitrary nature of this means potentially unpredictable performance with these traditional approaches.

How it works
The logic behind FVP’s clustered read caching approach is incredibly resilient and efficient. Cached reads for a VM can be fetched from any host participating in the cluster, which allows for a seamless leveraging of cache content regardless of where the VM lives in the cluster. Frank Denneman’s post on FVP’s remote cache access describes this in great detail.

Adjusting the charts
Since we will be looking at the FVP charts to better understand the benefit of just read caching alone, let’s create a custom view. This will allow us to really focus on read I/Os and not get them confused with any other write I/O activity occurring at the same time.

image

 

Note that when you choose a "Custom Breakdown", the same colors used to represent both reads and writes in the default "Storage Type" view will now be representing ONLY reads from their respective resource type. Something to keep in mind as you toggle between the default "Storage Type" view, and this custom view.

image

Looking at Offload
The goal for any well designed storage system is to deliver optimal performance to the applications.  With FVP, I/Os are offloaded from the array to the acceleration tier on the server side.  Read requests will be delivered to the VMs faster, reducing latency, and speeding up your applications. 

From a financial investment perspective, let’s not forget the benefit of I/O "offload" – in other words, read requests that were satisfied from the acceleration tier. Using FVP, these requests are offloaded from the storage arrays serving the persistent storage tier, from the array controllers, from the fabric, and from the HBAs. The more offload there is, the less work for your storage arrays and fabric, which means you can target more affordable backend storage. The hero numbers showcase the sum of this offload nicely.

image

Looking at Network acceleration reads
Unlike other host based solutions, FVP allows for common activities such as vMotions, DRS, and HA to work seamlessly without forcing any sort of rewarming of the cache from the backend storage. Below is an example of read I/O from 3 VMs in a production environment, and their ability to access cached reads on an acceleration device on a remote host.

image

Note how the latency remains low on those read requests that came from a remote acceleration device (the green line).

How good is my read caching working?
Regardless of which write policy (Write Through or Write Back) is being used in FVP, the cache is populated in the same way:

  • All read requests from the backing array will place the data into the acceleration tier as it is fetched from the backing storage.
  • All write I/O is placed in the cache as it is written to the physical storage.

Therefore, it is easy to conclude that if a read I/O did NOT come from the acceleration tier, it is for one of three reasons:

  • A block of data had been requested that had never been requested before.
  • The block of data had not been written recently, and thus was not residing in cache.
  • A block of data had once lived in the cache (via a read or write), but had been evicted due to cache size.

The first two items reflect the workload characteristics, while the last one is the result of a design decision: the cache size. With FVP you choose how large the devices are that make up the caching tier, so you ultimately determine how much the solution will benefit you. Cache size can have a dramatic impact on performance, because a larger cache has less pressure to evict previously cached data to make room for new data.
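The population and eviction behavior described above can be sketched with a toy LRU cache. This is only an illustration under my own assumptions (FVP's actual replacement policy is not documented in this post), but it shows the mechanics: both reads and writes populate the cache, and a cache that is too small for the workload churns through evictions.

```python
from collections import OrderedDict

class ToyReadCache:
    """Toy LRU cache: populated on both reads and writes, evicts oldest when full."""
    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.blocks = OrderedDict()   # block id -> True, ordered by recency
        self.evictions = 0

    def _insert(self, block):
        if block in self.blocks:
            self.blocks.move_to_end(block)    # refresh recency on a re-touch
            return
        if len(self.blocks) >= self.capacity:
            self.blocks.popitem(last=False)   # evict least recently used block
            self.evictions += 1
        self.blocks[block] = True

    def read(self, block):
        hit = block in self.blocks
        self._insert(block)   # a miss is fetched from the array, then cached
        return hit

    def write(self, block):
        self._insert(block)   # writes are cached as they are written to storage

# A larger cache suffers fewer evictions for the same workload.
workload = list(range(100)) * 3   # 100-block working set, re-read 3 times
small, large = ToyReadCache(50), ToyReadCache(150)
for b in workload:
    small.read(b); large.read(b)
print(small.evictions, large.evictions)   # → 250 0
```

Note what happens to the small cache: once the working set exceeds capacity, LRU thrashes and every subsequent read is a miss plus an eviction, while the larger cache absorbs the whole working set and never evicts.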

Visualizing the read cache usage
This is where the FVP metrics can tell the story. When looking at the "Custom Breakdown" view described earlier in this post, you can clearly see in the image below that while a sizable amount of reads were being serviced from the caching tier, the majority of reads (3,500+ IOPS sustained) in this 1 week time frame came from the backing datastore.

[Screenshot: 1 week of reads, with the majority served from the backing datastore]

Now, let’s contrast this with another environment and another workload. The image below clearly shows a large amount of data over the period of 1 day being served from the acceleration tier. Nearly all of the read I/Os, and over 60MBps of throughput, never touched the array.

[Screenshot: 1 day of reads, nearly all served from the acceleration tier]

When evaluating read cache sizing, this is one of the reasons why I like this particular “Custom Breakdown” view so much. Not only does it tell you how well FVP is offloading reads; it also tells you the POTENTIAL of all reads that *could* be offloaded from the array.  You get to choose how much offload occurs, because you decide how large your tier size is, and how many VMs participate in that tier.

Hit Rate will also tell you the percentage of reads that are coming from the acceleration tier at any point in time. This can be an effective way to view cache hit frequency, but to gain more insight, I often rely on this "Custom Breakdown" to get better context of how much data is coming from the cache versus the backing datastore at any point in time. Eviction Rate can provide complementary information if it shows evictions creeping upward.  But there can be cases where even a low eviction percentage evicts enough cached data over time to impact whether the data a VM needs is still in cache.  Thus the reason why this particular "Custom Breakdown" is my favorite for evaluating reads.
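As a quick sketch of the arithmetic behind a hit rate metric: it is simply the accelerated reads as a fraction of all reads. The function and counter values below are hypothetical numbers of the kind you would read off the breakdown chart, not values from FVP's API.

```python
def hit_rate(cache_iops, datastore_iops, network_iops=0):
    """Percentage of read IOPS served by the acceleration tier (local flash/RAM
    plus reads from a remote peer's cache) rather than the backing datastore.
    All inputs are hypothetical counters taken from a breakdown chart."""
    accelerated = cache_iops + network_iops
    total = accelerated + datastore_iops
    return 100.0 * accelerated / total if total else 0.0

# e.g. 1,200 IOPS from local cache, 300 from a remote peer, 3,500 from the array
print(round(hit_rate(1200, 3500, 300), 1))   # → 30.0
```

A 30% hit rate like this one is exactly the situation where the breakdown view adds context a single percentage cannot: it shows *which* resource served the other 70%.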

What might be a scenario for seeing a lot of reads coming from a backing datastore, and not from cache? Imagine running 500 VMs against an acceleration tier of just a few GB. The working set sizes are likely much larger than the cache size, which will churn through the cache and show little demonstrable benefit. Something to keep in mind if you are trying out FVP with a very small amount of RAM as an acceleration resource. Two effective ways to make this more efficient are to 1.) increase the cache size, or 2.) decrease the number of VMs participating in acceleration. Both achieve the same thing: providing more potential cache tier size for each accelerated VM. The idea for any caching layer is to have it large enough to hold most of the active data (aka the "working set") in the tier. With FVP, you can easily adjust the tier size, or the VMs participating in it.
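The working-set-versus-cache-size effect is easy to demonstrate with a small simulation. This is a toy model under my own assumptions (uniform random reads, an LRU policy, block counts instead of GB), not a model of FVP itself, but the shape of the result is the point: once the working set dwarfs the cache, the hit rate collapses.

```python
import random
from collections import OrderedDict

def simulate_hit_rate(cache_blocks, working_set_blocks, accesses=20000, seed=1):
    """Toy LRU simulation: uniform random reads over a fixed working set.
    Returns the fraction of reads served from cache."""
    rng = random.Random(seed)          # seeded for reproducible results
    cache, hits = OrderedDict(), 0
    for _ in range(accesses):
        block = rng.randrange(working_set_blocks)
        if block in cache:
            hits += 1
            cache.move_to_end(block)   # refresh recency
        else:
            if len(cache) >= cache_blocks:
                cache.popitem(last=False)   # evict LRU block to make room
            cache[block] = True
    return hits / accesses

# Cache that covers the working set vs. a cache one tenth its size
print(round(simulate_hit_rate(1000, 500), 2))   # working set fits: high hit rate
print(round(simulate_hit_rate(50, 500), 2))     # working set 10x cache: churn
```

Both remedies from the paragraph above map directly onto the first parameter: a bigger tier raises `cache_blocks`, and accelerating fewer VMs shrinks the effective `working_set_blocks` competing for it.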

Don’t know what your working set sizes are?  Stay tuned for PernixData Architect!

Summary
Once you have a good plan for read caching with FVP, and arrange for a setup with maximum offload, you can drive the best performance possible from clustered read caching. On its own, clustered read caching implemented the way FVP does it can change the architectural discussion of how you design and spend those IT dollars.  Pair this with write-buffering in the full edition of FVP, and it can change the game completely.