Reworking my PowerConnect 6200 switches for my iSCSI SAN

It sure is easy these days to get spoiled with the flexibility of virtualization and shared storage.  Optimization, maintenance, failover, and other adjustments are so much easier than they used to be.  However, there is an occasional reminder that some things are still difficult to change.  For me, that reminder was the switches I use for my SAN.

One of the many themes I kept hearing at this year’s Dell Storage Forum (a great experience, I must say) throughout several of the breakout sessions I went to was “get your SAN switches configured correctly.”  A nice reminder of something I was all too aware of already: my Dell PowerConnect 6224 switches had not been configured correctly since the day they replaced my slightly less capable (but rock solid) PowerConnect 5424’s.  I returned from the forum committed to getting my switchgear updated and configured the correct way.  Now for the tough parts…  What does “correct” really mean when it comes to the 6200 series switches?  And why didn’t I take care of this a long time ago?  Here are just a few excuses… er, reasons. 

  • At the time of initial deployment, I had difficulty tracking down documentation written specifically for configuring the 6224’s with iSCSI.  Eventually, I did my best to interpret the configuration settings of the 5424’s, and apply the same principles to the 6224’s.  Unfortunately, the 6224’s are a different animal than the 5424’s, and that showed up after I placed them into production – a task that I regretfully rushed.
  • When I deployed them into production, the current firmware was the 2.x generation.  It was my understanding after the deployment that the 2.x firmware on the 6200 series definitely had growing pains.  I also had the unfortunate timing that the next major revision came out shortly after I put them into production.
  • I had two stacked 6224 switches running my production SAN environment (a setup that was quite common among those I asked at the Dell Storage Forum).  While experimenting with settings might be fun in a lab, it is no fun – and serious business – when the switches are running a production environment.  I wanted to make adjustments just once, but had difficulty confirming settings.
  • When firmware needs to be updated (the eventual resolution to an issue I was reporting to Technical Support), it is going to take down the entire stack.  This means that you’d better have everything that uses the SAN powered off, unless you like living dangerously.  Major firmware updates will also require the boot code in each switch to be updated.  A true “lights out” maintenance window that required everything to be shut down.  The humble little 5424’s LAG’d together didn’t have that problem.
  • The 2.x to 3.x firmware update also required the boot code to be updated.  However, you couldn’t simply run an “update bootcode” command.  The documentation made this very clear.  The PowerConnect Technical Support Team indicated that the two versions ran different algorithms to unpack the contents, which was the reason for yet another exception to the upgrade process. 

One of the many best practices recommended at the Forum was to stack the switches instead of LAGing them.  Stack, stack, stack was drilled into everyone’s head.  The reasons are very good, and make a lot of sense.

  • Stacking modules in many ways extend the circuitry of a single switch, so the stacking module doesn’t have to honor or be limited by traditional Ethernet.
  • Managing one switch manages them all.
  • Better, more scalable bandwidth between switches
  • No messing around with LAG’s

But here lies the conundrum for many Administrators who are responsible for production environments.  While stacked 6224’s offer redundancy against hardware failure, they offer no redundancy when it comes to maintenance.  These stacked switches are seen as one logical unit, and may be your weakest link when it comes to maintenance of your virtualized infrastructure.  Interestingly enough, when I inquired further about effective strategies for updating under this topology, I observed that many other users were stuck with this very same dilemma, and the answers provided weren’t too exciting.  There were generally three answers I heard for this design decision:

  • Plan for a “lights out” maintenance window.
  • Buy another set of two switches, stack those, then trunk the two stacks together via 10GbE.
  • Buy better switches. 

See why I wasn’t too excited about my options?

Decision time.  I knew I’d suffer a bit of downtime updating the firmware and revamping the configuration no matter what I did.  Do I stack them as recommended, only to be faced with the same dilemma on the next firmware upgrade?  Or do I LAG the switches together so that I avoid this upgrade fiasco in the future?  LAG’ing is not perfect either, and the more arrays I add (as well as the inter-array traffic increasing with new array features), the more it might compound some of the limitations of LAGs. 

What option won out?  I decided to give stacking ONE more try.  I had to keep my eye on my primary objective: correcting my configuration by way of a firmware upgrade, and building up a simple, pristine configuration from scratch.  The idea was that the configuration would initially contain the minimum set of modifications to get the switches working according to best practices.  Then, I could build off of that configuration in the future.  Also influencing my decision was finding out that recommended settings for LAGs apparently change frequently.  For instance, just recently, the recommended flow control setting for the port channel in a LAG was changed.  These are the types of things I wanted to stay away from.  But with that said, I will continue to keep the option of LAGing them open, for the sole reason that it offers the flexibility for maintenance without shutting down the entire cluster.

So here were my minimum desired results for the switch stack after the upgrade and reconfiguration.  Pretty straightforward. 

  • Management traffic on another VLAN (VLAN 10) on port 1 (for uplinking) and port 2 (for local access).
  • iSCSI traffic on its own VLAN (VLAN 100), on all ports other than the management ports.
  • Essentially no traffic on the Default VLAN
  • Recommended global and port specific settings (flow control, spanning tree, jumbo frames, etc.) for iSCSI traffic endpoint connections
  • iSCSI traffic that was available to be routed through my firewall (for replication).

My configuration rework assumed the successful boot code and firmware upgrade to version 3.2.1.3.  I pondered a few different ways to speed this process up, but ultimately just followed the very good steps provided with the documentation for the firmware.  They were clear, and accurate.

By the way, on June 20th, 2011, Dell released their very latest firmware update (thank you, RSS feed), 3.2.1.3 A23.  This now includes their “Auto Detection” of ports for iSCSI traffic.  Even though the name implies a feature that might be helpful, the documentation did not provide enough information, and I decided to configure manually as originally planned.

For those who might be in the same boat as I was, here are the exact steps I took to build up a pristine configuration after updating the firmware and boot code.  The configuration below was definitely a combined effort by the folks from the EqualLogic and PowerConnect Teams, and me poring over a healthy amount of documentation.  It was my hope that this combined effort would eliminate some of the contradictory information I found in previous best practices articles, forum threads, and KB articles that assumed earlier firmware.  I’d like to thank them for being tolerant of my attention to detail, and of my desire to get this right the first time.  You’ll see that the rebuild steps are very simple.  Getting confirmation on this was not.

Step 1:  Reset the switch to defaults (make a backup of your old config, just in case)
enable
delete startup-config
reload

 
Step 2:  When prompted, follow the setup wizard in order to establish your management IP, etc. 
 
Step 3:  Put the switch into admin and configuration mode.
enable
configure

 
Step 4:  Establish Management Settings
hostname [yourstackhostname]
enable password [yourenablepassword]
spanning-tree mode rstp
flowcontrol

 
Step 5: Add the appropriate VLAN IDs to the database and setup interfaces.
vlan database
vlan 10
vlan 100
exit
interface vlan 1
exit
interface vlan 10
name Management
exit
interface vlan 100
name iSCSI
exit
ip address vlan 10
 
Step 6: Create an Etherchannel Group for Management Uplink
interface port-channel 1
switchport mode access
switchport access vlan 10
exit
NOTE: Because the switches are stacked, port one on each switch is configured in this channel-group, which can then be connected to your core switch or an intermediate switch for management access.  Port two on each switch can be used if you need to plug a laptop into the management VLAN, etc.
 
Step 7: Configure/assign Port 1 as part of the management channel-group:
interface ethernet 1/g1
switchport access vlan 10
channel-group 1 mode auto
exit
interface ethernet 2/g1
switchport access vlan 10
channel-group 1 mode auto
exit
 
Step 8: Configure Port 2 as Management Access Switchports (not part of the channel-group):
interface ethernet 1/g2
switchport access vlan 10
exit
interface ethernet 2/g2
switchport access vlan 10
exit
 
Step 9: Configure Ports 3-24 as iSCSI access Switchports
interface range ethernet 1/g3-1/g24
switchport access vlan 100
no storm-control unicast
spanning-tree portfast
mtu 9216
exit
interface range ethernet 2/g3-2/g24
switchport access vlan 100
no storm-control unicast
spanning-tree portfast
mtu 9216
exit
NOTE:  Binding the xg1 and xg2 interfaces into a port-channel is not required for stacking. 
 
Step 10: Exit from Configuration Mode
exit
 
Step 11: Save the configuration!
copy running-config startup-config

Step 12: Back up the configuration
copy startup-config tftp://[yourTFTPip]/conf.cfg

In hindsight, the most time consuming aspect of all of this was trying to confirm the exact settings for the 6224’s in an iSCSI SAN.  Running in second was shutting down all of my VMs, ESX hosts, and anything else that connected to the SAN switchgear.  The upgrade and the rebuild were relatively quick and trouble-free.  I’m thrilled to have this behind me now, and I hope that by passing this information along, you too will have a very simple working example to build your configuration off of.  As for the 6224’s, they are working fine now.  I will continue to keep my fingers crossed that Dell will eventually provide a way to update firmware on a stacked set of 6200 series switches without a lights out maintenance window.

Zero to 32 Terabytes in 30 minutes. My new EqualLogic PS4000e

Rack it up, plug it in, and away you go.  Those are basically the steps needed to expand a storage pool by adding another PS array using the Dell/EqualLogic architecture.  A few weeks ago I took delivery of a new PS4000e to complement my PS6000e at my primary site.  The purpose of this additional array was really simple.  We needed raw storage capacity.  My initial proposal and deployment of my virtualized infrastructure a few years ago was a good one, but I deliberately did not include our big flat-file storage servers in that initial scope of storage space requirements.  There was plenty to keep me occupied between the initial deployment and now.  It allowed me to get most of my infrastructure virtualized, and gave a chance for buy-in from the skeptics who thought all of this new-fangled technology was too good to be true.  Since that time, storage prices have fallen, and larger drive sizes have become available.  Delaying the purchase aligned well with “just-in-time” purchasing principles, and also gave me an opportunity to address the storage issue in the correct way.  At first, I thought all of this was a subject matter not worthy of writing about.  After all, EqualLogic makes it easy to add storage.  But that only addresses part of the problem.  Many of you face the same dilemma regardless of what your storage solution is: user facing storage growth.

Before I start rambling about my dilemma, let me clarify what I mean by a few terms I’ll be using: “user facing storage” and “non user facing storage.” 

  • User Facing Storage is simply the storage that is presented to end users via file shares (in Windows) and NFS mounts (in Linux).  User facing storage is waiting there, ready to be sucked up by an overzealous end user. 
  • Non User Facing Storage is the storage occupied by the servers themselves, and the services they provide.  Most end users generally have no idea how much space a server reserves for, say, SQL databases or transaction logs (nor should they!)  Non user facing storage needs are easier to anticipate and manage because this storage is only exposed to system administrators. 

Which array…

I decided to go with the PS4000e because of the value it returns, and how it addresses my specific need.  If I had targeted VDI or some storage for other I/O intensive services, I would have opted for one of the other offerings in the EqualLogic lineup.  I virtualized the majority of my infrastructure on one PS6000e with 16, 1TB drives in it, but it wasn’t capable of the raw capacity that we now needed to virtualize our flat-file storage.  While the effective number of 1GB ports is cut in half on the PS4000e as compared to the PS6000e, I have not been able to gather any usage statistics against my traditional storage servers that suggest the throughput of the PS4000e will not be sufficient.  The PS4000e allowed me to trim a few dollars off of my budget line estimates, and may work well at our CoLo facility if we ever need to demote it.

I chose to create a storage pool so that I could keep my volumes that require higher performance on the PS6000, and have the dedicated storage volumes on the PS4000.  I will do the same for when I eventually add other array types geared for specific roles, such as VDI.

Truth be told, we all know that 16 2-terabyte drives do not equal 32 terabytes of real world space.  The RAID50 penalty knocks that down to about 21TB.  Cut that by about half for average snapshot reserves, and it’s more like 11TB.  Keeping a little bit of free pool space available is always a good idea, so let’s just say it effectively adds 10TB of full-fledged enterprise class storage.  This adds to my effective storage space of 5TB on my PS6000.  Fantastic.  …but wait, one problem.  No, several problems.
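
For anyone who wants to sanity check that math, here is a rough back-of-the-napkin sketch in Python.  The overhead percentages are my own assumptions for illustration only – actual usable space depends on the RAID policy, hot spares, and how much snapshot reserve you assign per volume.

# Back-of-the-napkin capacity math for a 16 x 2TB array.
# The overhead figures are assumptions for illustration only; actual usable
# space depends on RAID policy, hot spares, and snapshot reserve settings.
raw_tb = 16 * 2                            # 32 TB of raw disk
after_raid50 = raw_tb * 0.65               # RAID50 + spares + formatting, roughly 21 TB
after_snap_reserve = after_raid50 * 0.5    # ~50% snapshot reserve, roughly 10.5 TB
print(f"Raw: {raw_tb} TB")
print(f"After RAID50 overhead: {after_raid50:.1f} TB")
print(f"After snapshot reserve: {after_snap_reserve:.1f} TB")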

The Dilemma

Turning up the new array was the easy part.  In less than 30 minutes, I had it mounted, turned on, and configured to work with my existing storage group.  Now for the hard part; figuring out how to utilize the space in the most efficient way.  User facing storage is a wildcard; do it wrong and you’ll pay for it later.  While I didn’t know the answer, I did know some things that would help me come to an educated decision.

  • If I migrate all of the data on my remaining physical storage servers (two of them, one Linux, and one Windows) over to my SAN, it will consume virtually all of my newly acquired storage space.
  • If I add a large amount of user-facing storage, and present that to end users, it will get sucked up like a vacuum.
  • If I blindly add large amounts of great storage at the primary site without careful thought, I will not have enough storage at the offsite facility to replicate to.
  • Large volumes (2TB or larger) not only run into technical limitations, but are difficult to manage.  At that size, there may also be a co-mingling of data that is not necessarily business critical.  Doling out user facing storage in large volumes is easy to do.  It will come back to bite you later on.
  • Manipulating the old data in the same volume as new data does not bode well for replication and snapshots, which look at block changes.  Breaking them into separate volumes is more effective.
  • Users will not take the time or the effort to clean up old data.
  • If data retention policies are in place, users will generally be okay with them after a substantial amount of complaining.  It’s not too different from the complaining you might hear when there are no data retention policies, but you have no space.  Pick your poison.
  • Your users will not understand data retention policies if you do not understand them.  Time for a plan.

I needed a way to compartmentalize some of the data so that it could be identified as “less important” and then perhaps live on less important storage.  “Less important storage” could mean a part of the SAN that is not replicated or, in a worst case scenario, even some old decommissioned physical servers, where the data resides for a defined amount of time before it is permanently archived and removed from the probationary location.

The Solution (for now)

Data Lifecycle management.  For many this means some really expensive commercial package, and that might be the way to go for you too.  To me, this is really nothing more than determining what data is important and what isn’t as important, and having a plan to help automate the demotion, or retirement, of that data.  However, there is a fundamental problem with this approach.  Who decides what’s important?  What are the thresholds?  Last accessed time?  Last modified time?  What are the ramifications of cherry-picking files from a directory structure because they exceed policy thresholds?  What is this going to break?  How easy is it to recover data that has been demoted?  There are a few steps that I need to take to accomplish this. 

1.  Poor man’s storage tiering.  If you are out of SAN space, re-provision an old server.  The purpose of this will be to serve up volumes that can be linked to the primary storage location through symbolic links.  These volumes can then be backed up at a less frequent interval, as it would be considered less important.  If you eventually have enough SAN storage space, these could be easily moved onto the SAN, but in a less critical role, or on a SAN array that has larger, slower disks.

2.  Breaking up large volumes.  I’m convinced that giant volumes do nothing for you when it comes to understanding and managing the contents.  Turning larger blobs into smaller blobs also serves another very important role.  It allows the intelligence of the EqualLogic solution to do its work in deciding where the data should live in a collection of arrays.  A storage Group that consists of, say, an SSD based array, a PS6000, and a PS4000 can effectively store each volume on the array that best suits the demand.

3.  Automating the process.  This will come in two parts: a.) deciding on structure, policies, etc. and b.) making or using tools to move the files from one location to another.  On the Linux side, this could mean anything from a bash script to something written in Python, with cron to schedule the occurrence.  In Windows, you could leverage PowerShell, VBScript, or batch files.  This would be as simple, or as complex, as your needs require.  However, if you are like me, you have limited time to tinker with scripting.  If there is something turn-key that does the job, go for it.  For me, that is an affordable little utility called “TreeSize Pro.”  It not only gives you the ability to analyze the contents of NTFS volumes, but can also automate the pruning of that data to another location.  (For the do-it-yourself crowd, a minimal sketch of this step follows the list below.)

4.  Monitoring the result.  This one is easy to overlook, but you will need to monitor the fruits of your labor, and make sure it is doing what it should be doing: maintaining available storage space on critical storage devices.  There are a handful of nice scripts that have been written for both platforms that help you monitor free storage space at the server level.
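
For those who would rather script step 3 than buy a utility, here is a minimal sketch of the idea in Python.  The paths, the age threshold, and the demotion criterion are all hypothetical placeholders – adjust them to your own retention policy, and schedule the script with cron or Task Scheduler.

import os
import shutil
import time

# Hypothetical locations; substitute your own primary and "demoted" storage paths.
PRIMARY = r"\\fileserver\projects"
DEMOTED = r"\\oldserver\demoted\projects"
MAX_AGE_DAYS = 4 * 365                       # demote anything untouched for ~4 years
cutoff = time.time() - MAX_AGE_DAYS * 86400

moved = 0
for root, dirs, files in os.walk(PRIMARY):
    for name in files:
        src = os.path.join(root, name)
        # Last-access time is the demotion criterion here; that is a policy decision.
        if os.path.getatime(src) < cutoff:
            dest = os.path.join(DEMOTED, os.path.relpath(src, PRIMARY))
            os.makedirs(os.path.dirname(dest), exist_ok=True)
            shutil.move(src, dest)
            moved += 1

# A nod to step 4: report how much room the exercise bought you.
usage = shutil.disk_usage(PRIMARY)
print(f"Moved {moved} files; {usage.free / 1024**4:.2f} TB now free on primary storage")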

The result

The illustration below helps demonstrate how this would work. 

image

As seen below, once a system is established to automatically move and house demoted data, you can more effectively use storage on the SAN.

image

Separation anxiety…

In order to make this work, you will have to work hard at making sure that all of this is pretty transparent to the end user.  If you have data that has complex external references, you will want to preserve the integrity of the data that relies on those dependent files.  Hey, I never said this was going to be easy. 

A few things worth remembering…

If 17 years in IT, and a little observation of human nature, have taught me one thing, it is that we all undervalue our current data, and overvalue our old data.  You see it time and time again.  Storage runs out, and there are cries for running down to the local box store and picking up some $99 hard drives.  What needs to reside on there is mission critical (hence the undervaluing of the new data).  Conversely, efforts to have users clean up old data from 10+ years ago resulted in users hiding files in special locations, even though the data had not been modified, or even accessed, in 4+ years.  All of this, of course, lives on enterprise class storage.  An all too common example of overvaluing old data.

Tip.  Remember your Service Level Agreements.  It is common in IT to have SLAs not only for systems and data, but for one’s position.  These are without doubt tied to one another.  Make sure that one doesn’t compromise the other.  Stopgap measures to accommodate more storage will trigger desperate, “affordable” solutions (e.g. adding cheap non-redundant drives in an old server somewhere).  Don’t do it!  All of those arm-chair administrators in your organization will be nowhere to be found when those drives fail, and you are left to clean up the mess.

Tip.  Don’t ever thin provision user facing storage.  Fortunately, I was lucky to be clued into this early on, but I could only imagine the well intentioned administrator who wanted to present a nice amount of storage space to the user, only to find it sucked up a few days later.  Save the thin provisioning for non user facing storage (servers with SQL databases and transaction logs, etc.)

Tip.  If you are presenting proposals to management, or general information updates to users, I would suggest quoting only the amount of effective, usable space that will be added.  In other words, don’t say you are adding 32TB to your storage infrastructure when, in fact, it is closer to 10TB.  Say that it is 10TB of extremely sophisticated, redundant enterprise class storage that you can “bet the business” on.  Its scalability, flexibility and robustness are needed for the 24/7 environments we insist upon.  It will just make it easier that way.

Tip.  It may seem unnecessary to you, but continue to show off snapshots, replication, and other unique aspects of SAN storage if you still have those who doubt the power of this kind of technology – especially when they see the cost per TB.  Remind them how long it would take (if it is even possible) to protect that same data with traditional storage.  Do everything you can to help those who approve these purchases.  More than likely, they won’t be as impressed by, say, how quick a snapshot is, but rather shocked at how poorly traditional storage can be protected.

You may have noticed I do not have any rock-solid answers for managing the growth and sustainability of user facing data.  Situations vary, but the factors that help determine that path for a solution are quite similar.  Whether you decide on a turn-key solution, or choose to demonstrate a little ingenuity in times of tight budgets, the topic is one that you will probably have to face at some point.

 

Replication with an EqualLogic SAN; Part 4

 

If you had asked me 6+ weeks ago how far along my replication project would be on this date, I would have thought I’d be basking in the glory of success, and admiring my accomplishments.

…I should have known better.

Nothing like several IT emergencies unrelated to this project to turn one’s itinerary into garbage.  A failed server (an old physical storage server that I don’t have room for on my SAN), a tape backup autoloader that tanked, some Exchange Server and Domain Controller problems, and a host of other odd things that I don’t even want to think about.  It is easy to overlook how much work it takes just to keep an IT infrastructure from losing ground from the day before.  At times, it can make you wonder how any progress is made on anything.

Enough complaining for now.  Let’s get back to it.  

 

Replication Frequency

For my testing, all of my replication is set to occur just once a day.  This is to keep it simple, and to help me understand what needs to be adjusted when my offsite replication is finally turned up at the remote site.

I’m not overly anxious to turn up the frequency even if the situation allows.  Some pretty strong opinions exist on how best to configure the frequency of the replicas.  Do a little bit with a high frequency, or a lot with a low frequency.  What I do know is this.  It is a terrible feeling to lose data, and one of the more overlooked ways to lose data is for bad data to overwrite your good data on the backups before you catch it in time to stop it.  Tapes, disk, simple file cloning, or fancy replication; the principle is the same, and so is the result.  Since the big variable is retention period, I want to see how much room I have to play with before I decide on frequency.  My purpose of offsite replication is disaster recovery.  …not to make a disaster bigger.
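
As a rough way to frame that trade-off: the retention period is approximately the space you reserve for replicas divided by the amount of changed data arriving per day, regardless of how that change is sliced into individual replicas (ignoring that more frequent replicas can re-send blocks that change repeatedly).  The numbers below are placeholders to illustrate the relationship, not measurements.

# Rough relationship between replica reserve, change rate, and retention.
# All figures are placeholders for illustration only.
reserve_gb = 100            # space set aside for replicas of a volume
changed_gb_per_day = 5      # average changed data per day on that volume
replicas_per_day = 1        # once a day in my testing; could be 4, 24, ...

retention_days = reserve_gb / changed_gb_per_day
replicas_retained = retention_days * replicas_per_day
print(f"Retention: about {retention_days:.0f} days ({replicas_retained:.0f} replicas kept)")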

 

Replication Sizes

The million dollar question has always been how much changed data, as perceived from the SAN, will occur in a given period of time on typical production servers.  It is nearly impossible to know this until one is actually able to run real replication tests.  I certainly had no idea.  This would be a great feature for Dell/EqualLogic to add to their solution suite: a way for a storage group to run a simulated replication that simply collects statistics accurately reflecting the amount of data that would be replicated during the test period.  What a great feature for those looking into SAN to SAN replication.

Below are my replication statistics for a 30 day period, where the replicas were created once per day, after the initial seed replica was created.

Average data per day per VM

  • 2 GB for general servers (service based)
  • 3 GB for servers with guest iSCSI attached volumes.
  • 5.2 GB for code compiling machines

Average data per day for guest iSCSI attached data volumes

  • 11.2 GB for Exchange DB and Transaction logs (for a 50GB database)
  • 200 MB for a SQL Server DB and Transaction logs
  • 2 GB for SharePoint DB and Transaction logs

The replica sizes for the VM’s were surprisingly consistent.  Our code compiling machines had larger replica sizes, as they write some data temporarily to the VM’s during their build processes.

The guest iSCSI attached data volumes naturally varied more from day-to-day activities.  Weekdays had larger amounts of replicated data than weekends.  This was expected.

Some servers, and how they generate data, may stick out like sore thumbs.  For instance, our source code control server uses a crude (but important) form of application layer backup.  The result is that for 75 GB worth of repositories, it would generate 100+ GB of changed data that it would want to replicate.  If the backup mechanism (which is a glorified file copy and package dump) is turned off, the amount of changed data drops to a very reasonable 200 MB per day.  This is a good example of how we will have to change our practices to accommodate replication.

 

Decreasing the amount of replicated data

Up to this point, the only step taken to reduce the amount of replicated data is the adjustment made in vCenter to move the VMs’ swap files off onto another VMFS volume that will not be replicated.  That of course only affects the VMs’ swap files at the hypervisor level – not the guest OS paging files controlled inside each VM.  I suspect that a healthy amount of the changed data on the VMs is their OS paging files.  The amount of changed data on those VM’s looked suspiciously similar to the amount of RAM assigned to the VM, and there typically is some correlation between how much RAM an OS has to run with and the size of the page file.  This is pure speculation at this point, but certainly worth looking into.

The next logical step would be to figure out what could be done to reconfigure VM’s to perhaps place their paging/swap files in a different, non-replicated location.   Two issues come to mind when I think about this step. 

1.)  This adds an unknown amount of complexity (for deploying, and restoring) to the systems running.  You’d have to be confident in the behavior of each OS type when it comes to restoring from a replica where it expects to see a page file in a certain location, but does not.  You would also have to ask how scalable this approach is.  It might be okay for a few machines, but how about a few hundred?  I don’t know.

2.)  It is unknown as to how much of a payoff there will be.  If the amount of data per VM gets reduced by say, 80%, then that would be pretty good incentive.  If it’s more like 10%, then not so much.  It’s disappointing that there seems to be only marginal documentation on making such changes.  I will look to test this when I have some time, and report anything interesting that I find along the way.

 

The fires… unrelated, and related

One of the first problems to surface recently were issues with my 6224 switches.  These were the switches that I put in place of our 5424 switches to provide better expandability.  Well, something wasn’t configured correctly, because the retransmit ratio was high enough that SANHQ actually notified me of the issue.  I wasn’t about to overlook this, and reported it to the EqualLogic Support Team immediately.

I was able to get these numbers under control by reconfiguring the NIC’s on my ESX hosts to talk to the SAN with standard frames.  Not a long term fix, but for the sake of the stability of the network, the most prudent step for now.

After working with the 6224’s, they do seem to behave noticeably differently than the 5424’s.  They are more difficult to configure, and the suggested configurations in the Dell documentation seemed more convoluted and contradictory.  Multiple documents and deployment guides had inconsistent information.  Technical Support from Dell/EqualLogic has been great in helping me determine what the issue is.  Unfortunately some of the potential fixes can be very difficult to execute.  Firmware updates on a stacked set of 6224’s will result in the ENTIRE stack rebooting, so you have to shut down virtually everything if you want to update the firmware.  The ultimate fix for this would be a revamp of the deployment guides (or let’s try just one deployment guide) for the 6224’s that nullifies any previous documentation.  By way of comparison, the 5424 switches were, and are, very easy to deploy. 

The other issue that came up was some unexpected behavior regarding replication and its use of free pool space.  I don’t have any empirical evidence to tie the two together, but this is what I observed.

During this past month in which I had an old physical storage server fail on me, there was a moment where I had to provision what was going to be a replacement for this box, as I wasn’t even sure if the old physical server was going to be recoverable.  Unfortunately, I didn’t have a whole lot of free pool space on my array, so I had to trim things up a bit, to get it to squeeze on there.  Once I did, I noticed all sorts of weird behavior.

1.  Since my replication jobs (with ASM/ME and ASM/VE) leverage the free pool space for the temporary replica/snapshot that is created on the source array, this caused problems.  The biggest one was that my Exchange server would completely freeze during its ASM/ME snapshot process.  Perhaps I had this coming to me, because I deliberately configured it to use free pool space (as opposed to a replica reserve) for its replication.  How it behaved caught me off guard, and made it interesting enough for me to never want to cut it close on free pool space again.

2.  ASM/VE replica jobs also seem to behave oddly with very little free pool space.  Again, this was self inflicted because of my configuration settings.  It left me desiring a feature that would allow you to set a threshold so that in the event of x amount of free pool space remaining, replication jobs would simply not run.  This goes for ASM/VE and ASM/ME.

Once I recovered that failed physical system, I was able to remove that VM I set aside for emergency turn up.  That increased my free pool space back up over 1TB, and all worked well from that point on. 

 

Timing

Lastly, one subject came up that doesn’t show up in any deployment guide I’ve seen.  The timing of all of this protection shouldn’t be overlooked.  One wouldn’t want to stack several replication jobs on top of each other that use the same free pool space but haven’t had the time to replicate.  Other snapshot jobs, replicas, consistency checks, traditional backups, etc. should be well coordinated to keep overlap to a minimum.  If you are limited on resources, you may also be able to use timing to your advantage.  For instance, set your daily replica of your Exchange database to occur at 5:00am, and your daily snapshot to occur at 5:00pm.  That way, you have reduced your maximum loss period from 24 hours to 12 hours, just by offsetting the times.

Replication with an EqualLogic SAN; Part 3

 

In parts one and two of my journey in deploying replication between two EqualLogic PS arrays, I described some of the factors that came into play on how my topology would be designed, and the preparation that needed to occur to get to the point of testing the replication functions. 

Since my primary objective of this project was to provide offsite protection of my VMs and data in the event of a disaster at my primary facility,  I’ve limited my tests to validating that the data is recoverable from or at the remote site.   The logistics of failing over to a remote site (via tools like Site Recovery Manager) is way outside the scope of what I’m attempting to accomplish right now.  That will certainly be a fun project to work on some day, but for now, I’ll be content with knowing my data is replicating offsite successfully.

With that out of the way, let the testing begin…

 

Replication using Group Manager 

Just like snapshots, replication using the EqualLogic Group Manager is pretty straightforward.  However, in my case, using this mechanism would not produce snapshots or replicas that are file-system consistent for VM datastores, and would only be reliable for data that was not being accessed, or VM’s that were turned off.  So for the sake of brevity, I’m going to skip these tests.

 

ASM/ME Replica creation.

My ASM/ME replication tests will simulate how I plan on replicating the guest attached volumes within VMs.  Remember, these are replicas of the guest attached volumes  only – not of the VM. 

On each VM where I have guest attached volumes and the HITKit installed (Exchange, SQL, file servers, etc.) I launched ASM/ME to configure and create the new replicas.  I’ve scheduled them to occur at a time separate from the daily snapshots.

image

As you can see, there are two different icons used: one representing snapshots, and the other representing replicas.  Each snapshot and replica will show that the guest attached volumes (in this case, “E:\” and “F:\”) have been protected using the Exchange VSS writer.  The two drives are being captured because I created the job from a “Collection,” which makes the most sense for Exchange and SQL systems that have DB files and transaction log data that you’d want to capture at the exact same time.  For the time being, I’m just letting them run once a day to collect some data on replication sizes.  ASM/ME is where recovery tasks would be performed on the guest attached volumes.

A tip for those who are running ASM/ME for Smartcopy snapshots or replication.  Define in your schedules a “keep count” number of snapshots or replicas that falls within the amount of snapshot reserve you have for that volume.  Otherwise, ASM/ME may take a very long time to start the console and reconcile the existing smart copies, and you will also find those old snapshots in the “broken” container of ASM/ME.  The startup delay can be so long, it almost looks as if the application has hung, but it has not, so be patient.  (By the way, ASM/VE version 2.0, which should be used to protect your VMs, does not have any sort of “keep count” mechanism.  Let’s keep our fingers crossed for that feature in version 3.0.)

 

ASM/ME Replica restores

Working with replicas using ASM/ME is about as easy as it gets.  Just highlight the replica, and click on “Mount as read-only.”  Unlike a snapshot, you do not have the option to “restore” over the existing volume when it’s a replica.

image

ASM/ME will ask for a drive letter to assign that cloned replica to.  Once it’s mounted, you may do with the data as you wish.  Note that it will be in a read only state.  This can be changed later if needed.

When you are finished with the replica, you can click on the “Unmount and Resume Replication…”

image

ASM/ME will ask you if you want to keep the replica around after you unmount it.  To keep it, uncheck the box next to “Delete snapshot from the PS Series group…”

 

ASM/VE replica creation

ASM/VE replication, which will be the tool I use to protect my VMs, took a bit more time to set up correctly due to the way that ASM/VE likes to work.  I somehow missed the fact that one needs a second ASM/VE server running at the target/offsite location for the ASM/VE server at the primary site to communicate with.  ASM/VE also seems to be hyper-sensitive to the version of Java installed on the ASM/VE servers.  Don’t get too anxious about updating to the latest version of Java.  Stick with a version recommended by EqualLogic.  I’m not sure what that officially would be, but I have been told by Tech Support that version 1.6 Update 18 is safe.

Unlike creating Smartcopy snapshots in ASM/VE, you cannot use the “Virtual Machines” view in ASM/VE to create Smartcopy replicas.  Only Datastores, Datacenters, and Clusters support replicas.  In my case, I will use the “Datastores” view to create replicas.  Since I made the adjustments to where my VM’s were placed in the datastores (see part 2, under “Preparing VMs for Replication”), it will still be clear which VMs will be replicated. 

image

After creating a Smartcopy replica of one of the datastores, I went to see how it looked.  In ASM/VE it appeared to complete successfully, and in SANHQ it also seemed to indicate a successful replica.  ASM/VE then gave a message of “contacting ASM peer” in the “replica status” column.  I’ve seen this occur right after I kicked off a replication job, but on successful jobs, it will disappear shortly.  If it doesn’t disappear, this can be a configuration issue (user accounts used to establish the connection due to known issues with ASM/VE 2.0), or caused by Java.

 

ASM/VE replica restores

At first, ASM/VE Smartcopy replicas didn’t make much sense to me, especially when it came to restores.  Perhaps I was attempting to think of them as a long distance snapshot, or that they might behave in the same way as ASM/ME replicas.  They work a bit  differently than that.  It’s not complicated, just different.

To work with the Smartcopy replica, you must first log into the ASM/VE server at the remote site.  From there, click on “Replication” > “Inbound Replicas,” highlighting the replica from the datastore you are interested in.  It will then present you with the options of “Failover from replica” and “Clone from replica.”  If you attempt to do this from the ASM/VE server at the primary site, these options never present themselves.  It makes sense to me after the fact, but it took me a few tries to figure that out.  For my testing purposes, I’m focusing exclusively on “Clone from replica.”  The EqualLogic documentation has good information on when each option can be used.

When choosing “Clone from Replica” it will have a checkbox for “Register new virtual machines.”  In my case, I uncheck this box, as my remote site will have just a few hosts running ESXi, and will not have a vCenter server to contact.

image

 

Once it is complete, access will need to be granted for the remote host in which you will want to try to mount the volume.  This can be accomplished by logging into the Group Manager of the target/offsite SAN group, selecting the cloned volume, and entering CHAP credentials, the IP address of the remote host, or the iSCSI initiator name. 

image

 

Jump right on over to the vSphere client for the remote host, and under “Configuration” > “Storage Adapters,” right click on your iSCSI software adapter, and select “Rescan.”  When complete, go to “Configuration” > “Storage” and you will notice that the volume does NOT show up.  Click “Add Storage” > “Disk/LUN.”

image

 

When a datastore is recognized as a snapshot, it will present you with the following options.  See http://www.vmware.com/pdf/vsphere4/r40/vsp_40_iscsi_san_cfg.pdf for more information on which option to choose.

image

 

Once completed, the datastore that was replicated to the remote site and cloned so that it could be made available to the remote ESX/i host should now be visible in “Datastores.” 

image

From there just browse the Datastore, drilling down to the folder of the VM you wish to turn up, highlight and right click the .vmx file, and select “Add to inventory.”  Your replicated VM should now be ready for you to power up.

If you are going to be cloning a VM replica living on the target array to a datastore, you will need to do one additional step if any of the VM’s have guest attached volumes using the guest iSCSI initiator.  At the target location, open up Group Manager, and drill down to “Replication Partners” > “[partnername]” and highlight the “Inbound” tab.  Expand the volume(s) that are associated with that VM.  Highlight the replica that you want, then click on “Clone replica”

image

This will allow you to reattach a guest attached volume to that VM.  Remember that I’m using the cloning feature simply to verify that my VM’s and data are replicating as they should.  Turning up systems for offsite use is a completely different ballgame, and not my goal – for right now anyway.

Depending on how you have your security and topology set up, and how connected your ESX host is offsite, your test VM you just turned up at the remote site may have the ability to contact Active Directory at your primary site, or guest attached volumes at your primary site.  This can cause problems for obvious reasons, so be careful to not let either one of those happen.  

 

Summary

While demonstrating some of these capabilities recently to the company, the audience (Developers, Managers, etc.) was very impressed with the demonstration, but their questions reminded me of just how little they understood the new model of virtualization, and shared storage.  This can be especially frustrating for Software Developers, who generally consider that there isn’t anything in IT that they don’t understand or know about.  They walked away impressed, and confused.  Mission accomplished.

Now that I’ve confirmed that my data and VM’s are replicating correctly, I’ll be building up some of my physical topology so that the offsite equipment has something to hook up to.  That will give me a chance to collect some statistics on replication, which I will share in the next post.

Replication with an EqualLogic SAN; Part 2

 

In part 1 of this series, I outlined the decisions made in order to build a replicated environment.  On to the next step: racking up the equipment, migrating my data, and laying some groundwork for testing replication.

While waiting for the new equipment to arrive, I wanted to take care of a few things first:

1.  Update my existing PS5000E array up to the latest firmware.  This has never been a problem, other than the times that I’ve forgotten to log in as the default  ‘grpadmin’ account (the only account allowed to do firmware updates).  The process is slick, with no perceived interruption.

2.  Map out how my connections should be hooked up on the switches.  Redundant switches can only be redundant if you plug everything in the correct way.

3.  IP addressing.  It’s all too easy just to randomly assign IP addresses to a SAN.  It may be its own isolated network, but in the spirit of “design as if you know it’s going to change,” it is worth observing good addressing practices.  My SAN is on a /24 net block, but I configure my IP addresses to respect potential address boundaries within that range.  This is so that I can subnet or VLAN them down (e.g. /28) later on, and it helps simplify rule sets on my ISA server that are based on address boundaries rather than a scattering of addresses.
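
As a quick illustration of what I mean by respecting address boundaries, the snippet below carves a hypothetical /24 SAN block into /28 chunks.  Keeping arrays, host NICs, and management interfaces inside their own chunk means a later subnet split, VLAN, or firewall rule can reference a single boundary.  The 10.10.50.0/24 block is just an example, not my actual addressing.

import ipaddress

# A hypothetical /24 for the SAN; substitute your own block.
san_block = ipaddress.ip_network("10.10.50.0/24")

# Carving it into /28s shows the natural boundaries to respect when
# assigning addresses to arrays, ESX host NICs, and management interfaces.
for subnet in san_block.subnets(new_prefix=28):
    print(subnet)

# A firewall rule (or a future VLAN) can then reference one boundary,
# e.g. 10.10.50.16/28, instead of a scattering of individual addresses.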

Preparing the new array

Once the equipment arrived, it made the most sense to get the latest firmware on the new array.  The quickest way is to set it up temporarily using the “initialize PS series array” feature in the “Remote Setup Wizard” of the EqualLogic HITKit on a machine that can access the array.  Make it its own group, update the firmware, then reset the array to the factory defaults.  After completing the update and typing “reset,” up comes the most interesting confirmation prompt you’ll ever see.  Instead of “Reset this array to factory defaults? [Y/N]” where a “Y” or “N” is required, the prompt is “Reset this array to factory defaults? [n/DeleteAllMyDataNow]”  You can’t say that isn’t clear.  I applaud EqualLogic for making this very clear.  Wiping a SAN array clean is serious stuff, and definitely should be harder than typing a “Y” after the word “reset.” 

After the unit was reset, I was ready to join it to the existing group temporarily so that I could evacuate all of the data from the old array and have it placed on the new array.  I plugged all of the array ports into the SAN switches, and turned it on.  Using the Remote Setup Wizard, I initialized the array, joined it to the group, then assigned and activated the rest of the NICs.  To migrate all of the data from one array to another, highlight the member with the data on it, then click on “Delete Member.”  Perhaps EqualLogic will revisit this term.  “Delete” just implies way too many things that don’t relate to this task.

The process of migrating data chugs along nicely.  VM’s and end users are none the wiser.  Once it is complete, the old array removes itself from the group, and resets itself to the factory defaults.  It’s really impressive.  Actually, the speed and simplicity of the process gave me confidence for when we need to add additional storage.

When the old array was back to its factory defaults, I went back to initialize the array, and set it up as a new member in a new group.  This would be my new group that would be used for some preliminary replication testing, and will eventually live at the offsite location.

As for how this process compares with competing products, I’m the wrong guy to ask.  I’ve had zero experience with Fiber Channel SANs, and iSCSI SANs from other vendors.  But what I can say is that it was easy, and fast.

After configuring the replication between the two groups, which consisted of configuring a few shared passwords between the two groups, and configuring replication to occur on each volume, I was ready to try it out…  Almost.

 

Snapshots, and replication.

It’s worth taking a step back to review a few things about snapshots and how the EqualLogic handles them.  Replicas appear to work in a similar (but not identical) manner to snapshots, so many of the same principles apply.  Remember that snapshots can be made in several ways.

1.  The most basic are snapshots created in the EqualLogic Group Manager.  These do exactly as they say, making a snapshot of the volume.  The problem is that they are not file-system consistent for VM datastores, and would only be suitable for datastores in which all of the VM’s were turned off at the time the snapshot was made.

2.  To protect VM’s, “Autosnapshot Manager VMware Edition” (ASM/VE) provides the ability to create a point-in-time snapshot, leveraging vCenter through VMware’s API, then does some nice tricks to make this an independent snapshot (well, of the datastore anyway) that you see in the EqualLogic Group Manager, under each respective volume.

3.  For VM’s with guest iSCSI attached drives, there is “Autosnapshot Manager Microsoft Edition” (ASM/ME).  This great tool is installed with the Host Integration Toolkit (HITKit).  It makes application aware snapshots by taking advantage of the Microsoft Volume Shadow Copy Service provider.  This is key for protecting SQL databases, Exchange databases, and even flat-file storage residing on guest attached drives.  It ensures that all I/O is flushed when the snapshot is created.  I’ve grown quite partial to this type of snapshot, as it’s nearly instant, causes no interruption to the end users or services, and provides easy recoverability.  The downside is that it can only protect data on iSCSI attached drives within the VM’s guest iSCSI initiator, and must have a VSS writer specific to an application (e.g. Exchange, SQL) in order for it to talk correctly.  You cannot protect the VM itself with this type of snapshot.  Also, vCenter is generally unaware of these types of guest attached drives, so VCB backups and other apps that rely on vCenter won’t include these types of volumes.

So just as I use ASM/ME for Smartcopy snapshots of my guest attached drives, and ASM/VE for my VM snapshots, I will use these tools in a similar way to create VM and application aware replicas of the VM’s and the data.

ASM/VE tip:  Smartcopy snapshots using ASM/VE give the option to “Include PS series volumes accessed by guest iSCSI initiators.”  I do not use this option for a few very good reasons, and rely completely on ASM/ME for properly capturing guest attached volumes. 

Default replication settings in EqualLogic Group Manager

When one first configures a volume for replication, some of the EqualLogic defaults are set very generously.  The two settings to look out for are the “Total replica reserve” and the “Local replication reserve.”  The result is that these very conservative settings can chew up a lot of the free space on your SAN.  Assuming you have a decent amount of free space in your storage pool, and you choose to stagger some of your replication to occur at various times of the day, you can reduce the “Local replication reserve” down to its minimum, then click the checkbox for “allow temporary use of free pool space.”  This will minimize the impact of enabling replication on your array.

 

Preparing VM’s for replication

There were a few things I needed to do to prepare my VM’s to be replicated.  I wasn’t going to tackle all optimization techniques at this time, but thought it be best to get some of the easy things out of the way first.

1.  Reconfigure VM’s so that the swap file is NOT in the same directory as the other VM files.  (This is the swap file for the VM at the hypervisor level; not to be confused with the guest OS swap file.)  First I created a volume in the EqualLogic group manager that would be dedicated for VM swap files, then made sure it was visible to each ESX host.  Then, simply configure the swap location at the cluster level in vCenter, followed by changing the setting on each ESX host.  The final step will be to power off and power on each VM.  (A restart/reboot will not work for this step.)  Once this is completed, you’ve eliminated a sizeable amount of data that doesn’t need to be replicated.

2.  Revamp datastores to reflect good practices with ASM/VE.  (I’d say “best practices” but I’m not sure if they exist, or if these qualify as such).  This is a step that takes into consideration how ASM/VE works, and how I use ASM/VE.   I’ve chosen to make my datastores reflect how my VM’s are arranged in vCenter.    Below is a screenshot in vCenter of the folders that contain all of my VMs.

image

Each folder has VMs in it that reside in just one particular datastore.  So for instance, the “Prodsystems-Dev” folder has a half dozen VM’s exclusively for our Development team.  These all reside in one datastore called VMFS05DS.  When a scheduled snapshot of a vCenter folder (e.g. “Prodsystems-Dev”) is run using ASM/VE, it will only hit those VM’s in that vCenter folder, and the single datastore that they reside on.  If it is not done this way, an ASM/VE snapshot of a folder containing VM’s that reside in different datastores will generate snapshots in each datastore.  This becomes terribly confusing to administer, especially when trying to recover a VM.

Since I recreated many of my volumes and datastores, I also jumped on the opportunity to make these new datastores with a 4MB block size instead of the default 1MB block size.  Not really necessary in my situation, but based on the link here, it seems like a good idea.

Once the volumes and the datastores were created and sized the way I desired, I used the storage vmotion function in vCenter to move each VM into the appropriate datastore to mimic my arrangement of folders in vCenter.  Because I’m sizing my datastores for a functional purpose, I have a mix of large and small datastores.  I probably would have made these the same size if it weren’t for how ASM/VE works.

The datastores are in place, and now mimic the arrangement of folders of VM’s in vCenter.  Now I’m ready to do a little test replication.  I’ll save that for the next post.

Suggested reading

Michael Ellerbeck has some great posts on his experiences with EqualLogic, replication, Dell switches, and optimization.    A lot of good links within the posts.
http://michaelellerbeck.com/

The Dell/EqualLogic Document Center has some good overview documents on how these components work together.  Lots of pretty pictures. 
http://www.equallogic.com/resourcecenter/documentcenter.aspx

Replication with an EqualLogic SAN; Part 1

 

Behind every great virtualized infrastructure is a great SAN to serve everything up.  I’ve had the opportunity to work with the Dell/EqualLogic iSCSI array for a while now, taking advantage of all of the benefits that the iSCSI based SAN array offers.  One feature that I haven’t been able to use is the built in replication feature.  Why?  I only had one array, and I didn’t have a location offsite to replicate to.

I suppose the real “part 1” of my replication project was selling the idea to the Management Team.  When it came to protecting our data and the systems that help generate that data, it didn’t take long for them to realize it wasn’t a matter of what we could afford, but how much we could afford to lose.  Having a building less than a mile away burn to the ground also helped the proposal.  On to the fun part; figuring out how to make all of this stuff work.

Of the many forms of replication out there, the most obvious one for me to start with is native SAN to SAN replication.  Why?  Well, it’s built right into the EqualLogic PS arrays, with no additional components to purchase, or license keys or fees to unlock features.  Other solutions exist, but it was best for me to start with the one I already had.

For companies with multiple sites, replication using EqualLogic arrays seems pretty straight forward.  For a company with nothing more than a single site, there are a few more steps that need to occur before the chance to start replicating data can happen.

 

Decision:  Colocation, or hosting provider

One of the first decisions that had to be made was whether we wanted our data to be replicated to a colocation facility (CoLo) with equipment that we owned and controlled, or to a hosting provider that could provide native PS array space and replication abilities.  Most hosting providers charge using some variety of metering of the data replicated.  Accurately estimating your replication costs assumes you have a really good understanding of how much data will be replicated.  Unfortunately, this is difficult to know until you start replicating.  The pricing models of these hosting providers reminded me too much of a cab fare: never knowing what you are going to pay until you get the big bill when you are finished.  A CoLo with equipment that we owned fit our current and future objectives much better.  We wanted fixed costs, and the ability to eventually do some hosting of critical services at the CoLo (web, ftp, mail relay, etc.), so it was an easy decision for us.

Our decision was to go with a CoLo facility located in the Westin Building in downtown Seattle.  Commonly known as the Seattle Internet Exchange (SIX), this is an impressive facility, not only in its physical infrastructure, but in how it provides peered interconnects directly from one ISP to another.  Our ISP uses this facility, so it worked out well to have our CoLo there as well.

 

Decision:  Bandwidth

Bandwidth requirements for our replication were, and still are, unknown, but I knew our bonded T1’s probably weren’t going to be enough, so I started exploring other options for higher speed access.  The first thing to check was to see if we qualified for Metro-E or “Ethernet over Copper” (award winner for the dumbest name ever).  Metro-E removes the element of T-carrier lines along with any proprietary signaling, and provides internet access over point-to-point connections at Layer 2, instead of Layer 3.  We were not close enough to the carrier’s central office to get adequate bandwidth, and even if we were, it probably wouldn’t scale up to our future needs.

Enter QMOE, or Qwest Metro Optical Ethernet.  This solution feeds Layer 2 Ethernet to our building via fiber, offering high bandwidth and low latency that can be scaled easily.

Our first foray using QMOE is running a 30mbps point-to-point feed to our CoLo, and uplinked to the Internet.  If we need more later, there is no need to add or change equipment.  Just have them turn up the dial, and bill you accordingly.

 

Decision:  Topology

Topology planning has been interesting, to say the least.  The best decision here depends on the use-case and, let’s not forget, what’s left in the budget. 

Two options immediately presented themselves.

1.  Replication data from our internal SAN would be routed (Layer 3) to the SAN at the CoLo.

2.  Replication data  from our internal SAN would travel by way of a VLAN to the SAN at the CoLo.

If my need was only to send replication data to the CoLo, one could take advantage of that Layer 2 connection, and send replication data directly to the CoLo without it being routed.  This would mean that it would have to bypass any routers/firewalls in place, and run to the CoLo on its own VLAN.

The QMOE network is built off of Cisco Equipment, so in order to utilize any VLANing from the CoLo to the primary facility, you must have Cisco switches that will support their VLAN trunking protocol (VTP).  I don’t have the proper equipment for that right now.

In my case, here is a very simplified illustration as to how the two topologies would look:

Routed Topology

image

 

Topology using VLANs

image

One may introduce more overhead and less effective throughput when the traffic becomes routed.  This is where a WAN optimization solution could come into play.  These solutions (SilverPeak, Riverbed, etc.) appear to be extremely good at improving effective throughput across many types of WAN connections.  They of course must sit at the correct spot in the path to the destination.  The units are often priced on bandwidth speed, and while they are very effective, they are also quite an investment.  But they work at Layer 3, and must sit in between the source and a router at both ends of the communication path; something that wouldn’t exist on a Metro-E circuit where VLANing was used to transmit replicated data.

The result is that for right now, I have chosen to go with a routed arrangement with no WAN optimization.  This does not differ too much from a traditional WAN circuit, other than that my latencies should be much better.  The next step, if our needs are not sufficiently met, would be to invest in a couple of Cisco switches, then send replication data over its own VLAN to the CoLo, similar to the illustration above.

 

The equipment

My original SAN array is an EqualLogic PS5000e connected to a couple of Dell PowerConnect 5424 switches.  My new equipment closely mirrors this, but is slightly better;  An EqualLogic PS6000e and two PowerConnect 6224 switches.  Since both items will scale a bit better, I’ve decided to change out the existing array and switches with the new equipment.

 

Some Lessons learned so far

If you are changing ISPs, and your old ISP has authoritative control of your DNS zone files, make sure your new ISP has the zone file EXACTLY the way you need it.  Then confirm it one more time.  Spelling errors and omissions in DNS zone files don’t work out very well, especially when you factor in the time it takes for the corrections to propagate through the net.  (Usually up to 72 hours, but it can feel like a lifetime when your customers can’t get to your website.) 

If you are going to go with a QMOE or Metro-E circuit, be mindful that you might have to force the external interface on your outermost equipment (in our case the firewall/router, but it could be a managed switch as well) to negotiate at 100mbps full duplex.  Auto negotiation apparently doesn’t work too well on many Metro-E implementations, and can cause fragmentation that will reduce your effective throughput by quite a bit.  This is exactly what we saw.  Fortunately it was an easy fix.

 

Stay tuned for what’s next…