Dogs, Rush hour traffic, and the history of storage I/O benchmarking – Part 2

Part one of "History of storage I/O Benchmarking" attempted to demonstrate how synthetic benchmarks on their own simply cannot generate or measure storage performance in a meaningful way. Emulating real workloads seems like a simple problem to solve, but in truth it is a complex one involving both technical and non-technical challenges.

  • Assessing workload characteristics is often left to conjecture.  Understanding the correct elements to observe is the first step to simulating them for testing, but how the problem is viewed is often shaped by whatever tool is readily available.  Many of these tools may look at the wrong variables.  A storage array might have great tools for monitoring the array, but that is an incomplete view as it relates to VM or application performance.
  • Understanding performance in a datacenter crosses boundaries of subject matter expertise.  A traditional storage administrator will see the world the same way the array views it: blocks, LUNs, queues, and transport protocols.  Ask them about performance and be prepared for a monologue on rotational latencies, RAID striping efficiencies, and read/write handling.  What about the latency as seen by the VM?  Don’t be surprised if that is never mentioned.  It may not even be their fault, since their view of the infrastructure may be limited by access control.
  • When introducing a new solution that uses a technology like Flash, the word itself is seen as a superlative, not a technology.  The name implies instant, fast, and other superhero-like qualities.  Brilliant industry marketing, but it comes at a cost.  Storage solutions are often improperly tested after some Flash-based technology is introduced, because conventional wisdom says it is universally faster than anything in the past.  That is a simplified and incorrect assertion.

Evaluating performance demands a balance of understanding the infrastructure, the workloads, and the software platforms they run on. This takes time and the correct tools for insight – something most are lacking. Part one described the characteristics of real workloads that are difficult to emulate, plus the flawed approach of testing in a clustered compute environment. Unfortunately, it doesn’t end there. There is another factor to be considered: the physical characteristics of storage performance tiering layers, and the logic moving data between those layers.

Storage Performance tiering
Most datacenters deliver storage performance using multiple persistent storage tiers and various forms of caching and buffering. Synthetic benchmarks force a behavior on these tiers that may be unrealistic. This is often difficult to decipher, as the tier sizes and data handling can be obfuscated by a storage vendor, or simply unknown to the tester. What we do know is that storage tiering comes in all shapes and sizes, whether it is a traditional array with data progression techniques, a hybrid array, a decoupled architecture like PernixData FVP, or a hyperconverged solution. The reality is that this tiering occurs all the time.

With that in mind, there are two distinct approaches to test these environments.

  • Testing storage in a way that guarantees no I/O comes from or goes to a top performing tier.
  • Testing storage in a way that guarantees all I/O comes from and goes to a top performing tier.

Which method is right for you? Neither method is right or wrong on its own, as each can serve a purpose. Let’s use the car analogy again.

  • Some might be averse to driving an electric car that only has a 100-mile range.  But what if you had a commute that rarely went more than 30 miles a day?  Think of that commute as the analog of a caching/buffering tier.  If a caching layer is large enough that it can serve that I/O 95% of the time, it may not be necessary to focus on testing performance from the lower tier of storage.
  • In that same spirit, let’s say that same owner changed jobs and now drives 200 miles a day.  That same car is a pretty poor solution for the job.  Similarly, if a storage array had just 20GB of caching/buffering for 100TB of persistent storage, the realistic working set sizes of the VMs living on that storage would realize very little benefit from that 20GB of caching space.  In that case, it would be better to test the performance of the lower tier of storage.

What about testing the storage in a way that guarantees data comes from all tiers?  Mixing a combination of the two sounds ideal, but it often will not simulate the way real data will reside on the tiers, and it produces a result that is difficult to interpret: does it reflect the way a real workload will behave?  Because caching tier sizes are rarely identified, and there is often no true way to isolate a tier, this ironically ends up being the approach most commonly used – by accident alone.
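The commute analogy above can be made concrete with a little arithmetic. The following is a minimal sketch, using assumed and purely illustrative latency figures (not vendor data), of how dramatically the cache hit ratio drives the effective latency a workload actually sees:

```python
# Illustrative sketch: effective average read latency as a function of the
# cache hit ratio. The latency figures are assumptions for illustration only.

def effective_latency_us(hit_ratio, cache_us=100.0, backend_us=5000.0):
    """Average latency when hit_ratio of the I/O is served from cache."""
    return hit_ratio * cache_us + (1.0 - hit_ratio) * backend_us

# A 95% hit ratio (the "30-mile commute" that fits the cache)...
print(round(effective_latency_us(0.95), 1))  # 345.0 microseconds
# ...versus a 20% hit ratio (a working set that dwarfs the cache).
print(round(effective_latency_us(0.20), 1))  # 4020.0 microseconds
```

A test that happens to land almost entirely in one tier or the other will report one of these two very different numbers, which is exactly why an accidental mix of tiers is so hard to interpret.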

When generating synthetic workloads that have a large proportion of writes, it can often be quite easy to hit buffer limit thresholds. Once again, this is because a benchmark commits every CPU cycle as a write I/O, for unrealistic periods of time. Even in extremely write-intensive environments, this is completely unrealistic. It is for that reason that a synthetic benchmark can create a behavior against a tiered storage solution that rarely, if ever, happens in a real world environment.

When generating read I/O based synthetic tests using a large test file, those reads may sometimes hit the caching tier, and other times hit the slowest tier, which may show sporadic results. The reaction to this result often leads to running the test longer. The problem however is the testing approach, not the length of the test. Understanding the working set size of a VM is key, and should dictate how best to test in your environment. How do you determine a working set size? Let’s save that for a future post. Ultimately it is real workloads that matter, so the more you can emulate the real workloads, the better.

Storage cache population and eviction: not all caching is the same
Caching layers in storage solutions can come in all shapes and sizes, but they depend on rules of engagement that may be difficult to determine. An example of two very important characteristics would be:

  • How they place data in cache.  Is some sort of predictive "data progression" algorithm being used?  Are the tiers using write-through caching to populate the cache, in addition to populating it with data fetched from the backend storage?
  • How they choose to evict data from cache.  Does the tier use First-In-First-Out (FIFO), Least Recently Used (LRU), Least Frequently Used (LFU), or some other approach for eviction?
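To see why the eviction policy alone matters, here is a minimal sketch (a hypothetical three-entry cache and a made-up access stream, not any vendor's implementation) showing how FIFO and LRU diverge on the exact same sequence of block accesses:

```python
from collections import OrderedDict

def simulate(policy, accesses, capacity=3):
    """Count cache hits for a tiny cache under FIFO or LRU eviction."""
    cache = OrderedDict()
    hits = 0
    for block in accesses:
        if block in cache:
            hits += 1
            if policy == "LRU":
                cache.move_to_end(block)   # LRU refreshes recency on a hit...
        else:                              # ...FIFO keeps original arrival order.
            if len(cache) >= capacity:
                cache.popitem(last=False)  # evict the oldest entry
            cache[block] = True
    return hits

stream = ["A", "B", "C", "A", "D", "A", "E", "A"]
print(simulate("FIFO", stream), simulate("LRU", stream))  # 2 3
```

The frequently re-read block "A" survives under LRU but gets evicted under FIFO, so the same workload sees a different hit ratio – and a synthetic benchmark with a uniform access pattern will never expose that difference.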

Synthetic benchmarks do not accommodate this well.  Real world workloads, however, depend heavily on these behaviors, and the differences show up only in production environments.

Other testing mistakes
As if there weren’t enough ways to screw up a test, here are a few other common storage performance testing mistakes.

  • Not testing as close to the application level as possible.  This sort of functional testing will be more in line with how the application (and OS it lives on) handles real world data.
  • Long test durations.  Synthetic benchmarks are of little use when running an exhaustive (multi-hour) set of tests.  They tell very little, and just waste time.
  • Overlooking a parameter on a benchmark.  Settings matter because they can yield very different results.
  • Misunderstanding the read/write ratios of an environment.  Are you calculating your ratio by IOPS, or by throughput?  This alone can lead to two very different results.
  • Misunderstanding the typical I/O sizes of the organization for reads and writes.  How are you choosing to determine what the typical I/O size is?
  • Testing reads and writes like two independent objectives.  Real workloads do not work like this, so there is little reason to test like this.
  • Using a final ‘score’ provided by a benchmark.  The focus should be on the behavior for the duration of the test.  Especially with technologies like Flash, careful attention should be paid to side effects from garbage collection techniques and other events that cause latency spikes. Those spikes matter.
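The IOPS-versus-throughput point in the list above is easy to demonstrate with a quick calculation. This sketch uses hypothetical numbers (many small reads, fewer large writes) purely to show how the same workload produces two very different "read percentages":

```python
# Hypothetical workload: lots of small reads, fewer but much larger writes.
read_iops, read_bs_kb = 7000, 8      # 7,000 reads/s at 8 KB each
write_iops, write_bs_kb = 3000, 64   # 3,000 writes/s at 64 KB each

# Ratio counted by I/O operations...
iops_read_pct = 100 * read_iops / (read_iops + write_iops)

# ...versus the ratio counted by data moved.
read_tput = read_iops * read_bs_kb    # KB/s of read traffic
write_tput = write_iops * write_bs_kb # KB/s of write traffic
tput_read_pct = 100 * read_tput / (read_tput + write_tput)

print(round(iops_read_pct))   # 70  -> "70% reads" by IOPS...
print(round(tput_read_pct))   # 23  -> ...but only ~23% reads by throughput
```

Choose the wrong one of these two numbers when configuring a benchmark's read/write mix, and the test exercises a very different workload than the one you meant to emulate.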

Testing organizations are often vying for a position as a testing authority, or pushing methods or standards that supposedly eliminate the mistakes described in this blog post series. Unfortunately, they do not. It does not matter much anyway, as it is your data and your workloads that count.

Making good use of synthetic benchmarks
It may come across as if synthetic benchmarks or synthetic load generators are useless. That is untrue. In fact, I use them all the time – just not in the way conventional wisdom indicates. The real benefit comes once you accept the fact that they do not simulate real workloads. Here are a few scenarios in which they are quite useful.

  • Steady-state load generation.  This is especially useful in virtualized environments when you are trying to create load against a few systems.  It can be a great way to learn and troubleshoot.
  • Micro-benchmarking.  This is really about taking a small snippet of a workload, and attempting to emulate it for testing and evaluation.  Often times the test may only be 5 to 30 seconds, but will provide a chance to capture what is needed.  It’s more about generating I/O to observe behavior than testing absolute performance.  Look here for a good example.
  • Comparing independent hardware components.  This is a great way to show the differences between an old and a new SSD.
  • Helping provide broader insight into the bigger architectural picture.
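As an illustration of the micro-benchmarking idea above, a short job for a load generator such as fio might look like the following. This is only a sketch: the block size, read/write mix, working set size, and duration are assumptions you would replace with values observed from your own workload.

```ini
; Hypothetical fio job: a 30-second snippet approximating an observed
; 70/30 random read/write mix at 8k, rather than an exhaustive stress test.
[short-snippet]
rw=randrw
rwmixread=70
bs=8k
size=2g
iodepth=16
runtime=30
time_based
direct=1
```

The point of a job like this is to generate a brief, representative burst of I/O so you can observe how the tiers respond, not to produce a headline performance score.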

Observe, Learn and Test
To avoid wasting time "testing" meaningless conditions, spend some time in vCenter, esxtop, and other tools to capture statistics. Learn about your existing workloads before running a benchmark. Collaborating with an internal application owner can make better use of your testing efforts. For instance, if you are looking to improve your SQL performance, create a series of tests, or modify an existing batch job to run inside of SQL, to establish baselines and observe behavior. Test at the busiest time and the quietest time of the day, as they both provide great data points. This approach was incredibly helpful for me when I was optimizing an environment for code compiling.

Try not to lose sight of the fact that testing storage performance is not about testing an array. It is about testing how your workloads behave against your storage architecture. Real applications always tell the real story. The reason most dislike this answer is that it is difficult to repeat and challenging to measure the right way. Testing the correct way means spending a little time better understanding the demand your applications put on your environment.

And here you thought you ran out of things to do for the day. Happy testing.






