Replication with an EqualLogic SAN; Part 3

 

In parts one and two of my journey in deploying replication between two EqualLogic PS arrays, I described some of the factors that influenced how my topology would be designed, and the preparation needed to get to the point of testing the replication functions.

Since my primary objective of this project was to provide offsite protection of my VMs and data in the event of a disaster at my primary facility, I’ve limited my tests to validating that the data is recoverable from or at the remote site. The logistics of failing over to a remote site (via tools like Site Recovery Manager) are well outside the scope of what I’m attempting to accomplish right now. That will certainly be a fun project to work on some day, but for now, I’ll be content with knowing my data is replicating offsite successfully.

With that out of the way, let the testing begin…

 

Replication using Group Manager 

Just like snapshots, replication using the EqualLogic Group Manager is pretty straightforward. However, in my case, using this mechanism would not produce snapshots or replicas of VM datastores that are file-system consistent, and would only be reliable for data that was not being accessed, or VMs that were turned off. So for the sake of brevity, I’m going to skip these tests.

 

ASM/ME Replica creation

My ASM/ME replication tests will simulate how I plan on replicating the guest attached volumes within VMs.  Remember, these are replicas of the guest attached volumes  only – not of the VM. 

On each VM where I have guest attached volumes and the HITKit installed (Exchange, SQL, file servers, etc.) I launched ASM/ME to configure and create the new replicas.  I’ve scheduled them to occur at a time separate from the daily snapshots.

[screenshot: snapshots and replicas listed in ASM/ME]

As you can see, there are two different icons used; one represents snapshots, and the other represents replicas. Each snapshot and replica will show that the guest attached volumes (in this case, “E:\” and “F:\”) have been protected using the Exchange VSS writer. The two drives are being captured because I created the job from a “Collection,” which makes the most sense for Exchange and SQL systems that have DB files and transaction log data you’d want to capture at the exact same time. For the time being, I’m just letting them run once a day to collect some data on replication sizes. ASM/ME is where recovery tasks would be performed on the guest attached volumes.

A tip for those who are running ASM/ME for Smartcopy snapshots or replication: define in your schedules a “keep count” number of snapshots or replicas that falls within the amount of snapshot reserve you have for that volume. Otherwise, ASM/ME may take a very long time to start the console and reconcile the existing smart copies, and you will also find those old snapshots in the “broken” container of ASM/ME. The startup delay can be so long, it almost looks as if the application has hung, but it has not, so be patient. (By the way, ASM/VE version 2.0, which should be used to protect your VMs, does not have any sort of “keep count” mechanism. Let’s keep our fingers crossed for that feature in version 3.0.)
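For anyone tempted to script around that ASM/VE gap, the idea behind a keep count is simple enough to sketch in a few lines of Python. To be clear, the objects below are stand-ins; ASM has no public API that I’m aware of, so treat this as a concept sketch only:

# Concept sketch: prune smart copies down to a keep count.
# "smart_copies" and .delete() are hypothetical stand-ins, not a real ASM API.
def prune_smart_copies(smart_copies, keep=7):
    """Delete all but the newest `keep` copies, oldest first."""
    expired = sorted(smart_copies, key=lambda c: c.create_time)[:-keep]
    for copy in expired:
        copy.delete()  # however your tooling actually removes a copy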

 

ASM/ME Replica restores

Working with replicas using ASM/ME is about as easy as it gets. Just highlight the replica, and click on “Mount as read-only.” Unlike a snapshot, you do not have the option to “restore” over the existing volume when it’s a replica.

[screenshot: mounting a replica as read-only in ASM/ME]

ASM/ME will ask for a drive letter to assign to that cloned replica. Once it’s mounted, you may do with the data as you wish. Note that it will be in a read-only state. This can be changed later if needed.

When you are finished with the replica, you can click on “Unmount and Resume Replication…”

[screenshot: unmounting the replica]

ASM/ME will ask you if you want to keep the replica around after you unmount it.  To keep it, uncheck the box next to “Delete snapshot from the PS Series group…”

 

ASM/VE replica creation

ASM/VE replication, which will be the tool I use to protect my VMs, took a bit more time to set up correctly due to the way that ASM/VE likes to work. I somehow missed the fact that one needs a second ASM/VE server running at the target/offsite location for the ASM/VE server at the primary site to communicate with. ASM/VE also seems to be hypersensitive to the version of Java installed on the ASM/VE servers. Don’t be too eager to update to the latest version of Java; stick with a version recommended by EqualLogic. I’m not sure what that officially would be, but I have been told by Tech Support that version 1.6 Update 18 is safe.

Unlike creating Smartcopy snapshots in ASM/VE, you cannot use the “Virtual Machines” view in ASM/VE to create Smartcopy replicas. Only Datastores, Datacenters, and Clusters support replicas. In my case, I will use the “Datastores” view to create replicas. Since I adjusted where my VMs were placed in the datastores (see part 2, under “Preparing VMs for replication”), it is still clear which VMs will be replicated.

[screenshot: creating a Smartcopy replica from the ASM/VE Datastores view]

After creating a Smartcopy replica of one of the datastores, I went to see how it looked. In ASM/VE it appeared to complete successfully, and SANHQ also seemed to indicate a successful replica. ASM/VE then gave a message of “contacting ASM peer” in the “replica status” column. I’ve seen this occur right after kicking off a replication job, but on successful jobs it disappears shortly. If it doesn’t disappear, this can be a configuration issue (the user accounts used to establish the connection, a known issue with ASM/VE 2.0), or caused by Java.

 

ASM/VE replica restores

At first, ASM/VE Smartcopy replicas didn’t make much sense to me, especially when it came to restores.  Perhaps I was attempting to think of them as a long distance snapshot, or that they might behave in the same way as ASM/ME replicas.  They work a bit  differently than that.  It’s not complicated, just different.

To work with the Smartcopy replica, you must first log into the ASM/VE server at the remote site. From there, click “Replication” > “Inbound Replicas” and highlight the replica from the datastore you are interested in. It will then present you with the options of “Failover from replica” and “Clone from replica.” If you attempt to do this from the ASM/VE server at the primary site, these options never present themselves. It makes sense to me after the fact, but it took me a few tries to figure that out. For my testing purposes, I’m focusing exclusively on “Clone from replica.” The EqualLogic documentation has good information on when each option can be used.

When choosing “Clone from Replica,” there will be a checkbox for “Register new virtual machines.” In my case, I uncheck this box, as my remote site will have just a few hosts running ESXi, and will not have a vCenter server to contact.

[screenshot: “Clone from Replica” options]

 

Once it is complete, access will need to be granted to the remote host on which you want to mount the volume. This can be accomplished by logging into the Group Manager of the target/offsite SAN group, selecting the cloned volume, and entering CHAP credentials, the IP address of the remote host, or the iSCSI initiator name.

[screenshot: volume access settings in Group Manager]

 

Jump right on over to the vSphere client for the remote host, and under “Configuration” > “Storage Adapters,” right-click on your iSCSI software adapter and select “Rescan.” When complete, go to “Configuration” > “Storage” and you will notice that the volume does NOT show up. Click “Add Storage” > “Disk/LUN.”
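That rescan can also be scripted. Here is a minimal pyVmomi sketch, connecting straight to the ESXi host; the host name, credentials, and the use of pyVmomi at all are my assumptions for illustration, not anything from the EqualLogic or VMware documentation:

import ssl
from pyVim.connect import SmartConnect

ctx = ssl._create_unverified_context()  # lab shortcut; use verified certs in production
si = SmartConnect(host="esxi-remote.example.local", user="root",
                  pwd="secret", sslContext=ctx)

# Connected directly to a host, the inventory holds a single datacenter/compute resource.
datacenter = si.content.rootFolder.childEntity[0]
host = datacenter.hostFolder.childEntity[0].host[0]

storage = host.configManager.storageSystem
storage.RescanAllHba()   # the scripted equivalent of clicking “Rescan” on the iSCSI adapter
storage.RescanVmfs()     # then look for VMFS volumes on any newly visible LUNs

I leave the session open here; the later sketches in this post reuse si, datacenter, and storage.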

[screenshot: “Add Storage” wizard in the vSphere client]

 

When a datastore is recognized as a snapshot, it will present you with the following options.  See http://www.vmware.com/pdf/vsphere4/r40/vsp_40_iscsi_san_cfg.pdf for more information on which option to choose.
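Continuing the pyVmomi session from the sketch above, you can also list the volumes a host currently considers unresolved snapshot copies before deciding how to mount them (again, an illustrative sketch, not the official procedure):

# List VMFS volumes the host sees as snapshots/copies awaiting a mount decision.
for vol in storage.QueryUnresolvedVmfsVolume():
    print(vol.vmfsLabel, "resolvable:", vol.resolveStatus.resolvable)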

[screenshot: mount options for a VMFS volume detected as a snapshot]

 

Once completed, the datastore that was replicated to the remote site (and cloned so that it could be made available to the remote ESX/i host) should now be visible in “Datastores.”

[screenshot: the cloned datastore visible under “Datastores”]

From there, just browse the datastore, drilling down to the folder of the VM you wish to turn up, highlight and right-click the .vmx file, and select “Add to inventory.” Your replicated VM should now be ready for you to power up.
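“Add to inventory” has a scriptable counterpart too. A hedged sketch, reusing the pyVmomi session from earlier; the datastore and VM names here are invented for illustration:

# Register the cloned VM's .vmx with the host: the scripted "Add to inventory".
vm_folder = datacenter.vmFolder
pool = datacenter.hostFolder.childEntity[0].resourcePool

task = vm_folder.RegisterVM_Task(
    path="[VMFS05DS-clone] somevm/somevm.vmx",  # hypothetical datastore path to the .vmx
    asTemplate=False,
    pool=pool,
)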

If you are going to be cloning a VM replica living on the target array to a datastore, you will need to do one additional step if any of the VMs have guest attached volumes using the guest iSCSI initiator. At the target location, open up Group Manager, drill down to “Replication Partners” > “[partnername]” and highlight the “Inbound” tab. Expand the volume(s) that are associated with that VM. Highlight the replica that you want, then click on “Clone replica.”

[screenshot: cloning an inbound replica in Group Manager]

This will allow you to reattach a guest attached volume to that VM. Remember that I’m using the cloning feature simply to verify that my VMs and data are replicating as they should. Turning up systems for offsite use is a completely different ballgame, and not my goal – for right now anyway.

Depending on how you have your security and topology set up, and how connected your offsite ESX host is, the test VM you just turned up at the remote site may be able to contact Active Directory at your primary site, or guest attached volumes at your primary site. This can cause problems for obvious reasons, so be careful not to let either one happen.

 

Summary

While demonstrating some of these capabilities recently to the company, the audience (developers, managers, etc.) was very impressed with the demonstration, but their questions reminded me of just how little they understood about the new model of virtualization and shared storage. This can be especially frustrating for software developers, who generally assume there isn’t anything in IT they don’t understand or know about. They walked away impressed, and confused. Mission accomplished.

Now that I’ve confirmed that my data and VMs are replicating correctly, I’ll be building up some of my physical topology so that the offsite equipment has something to hook up to. That will give me a chance to collect some statistics on replication, which I will share in the next post.

Replication with an EqualLogic SAN; Part 2

 

In part 1 of this series, I outlined the decisions made in order to build a replicated environment. On to the next step: racking up the equipment, migrating my data, and laying some groundwork for testing replication.

While waiting for the new equipment to arrive, I wanted to take care of a few things first:

1.  Update my existing PS5000E array to the latest firmware. This has never been a problem, other than the times that I’ve forgotten to log in as the default ‘grpadmin’ account (the only account allowed to do firmware updates). The process is slick, with no perceived interruption.

2.  Map out how my connections should be hooked up on the switches.  Redundant switches can only be redundant if you plug everything in the correct way.

3.  IP addressing. It’s all too easy just to randomly assign IP addresses to a SAN. It may be its own isolated network, but in the spirit of “design as if you know it’s going to change,” it might just be worth observing good addressing practices. My SAN is on a /24 net block, but I configure my IP addresses to respect potential address boundaries within that range. This way I can subnet or VLAN them down (e.g. /28) later on, and it helps simplify rule sets on my ISA server, which are based on address boundaries rather than a scattering of addresses.
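To make that concrete, here is a quick illustration with Python’s standard ipaddress module (the 10.x block is a made-up example, not my actual SAN range). Carving the /24 along /28 boundaries gives sixteen aligned sub-ranges, so addresses assigned with those boundaries in mind can later become real subnets or VLANs without renumbering:

import ipaddress

san = ipaddress.ip_network("10.10.10.0/24")  # hypothetical SAN block

# Sixteen /28 boundaries inside the /24; assign addresses with these in mind.
for subnet in san.subnets(new_prefix=28):
    print(subnet)  # 10.10.10.0/28, 10.10.10.16/28, ... 10.10.10.240/28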

Preparing the new array

Once the equipment arrived, it made the most sense to get the latest firmware on the new array. The quickest way is to set it up temporarily using the “initialize PS series array” feature in the “Remote Setup Wizard” of the EqualLogic HITKit on a machine that can access the array. Make it its own group, update the firmware, then reset the array to the factory defaults. After completing the update and typing “reset,” up comes the most interesting confirmation prompt you’ll ever see. Instead of “Reset this array to factory defaults? [Y/N]” where a “Y” or “N” is required, the prompt is “Reset this array to factory defaults? [n/DeleteAllMyDataNow]”. You can’t say that isn’t clear, and I applaud EqualLogic for it. Wiping a SAN array clean is serious stuff, and definitely should be harder than typing a “Y” after the word “reset.”

After the unit was reset, I was ready to join it to the existing group temporarily so that I could evacuate all of the data from the old array and have it placed on the new array. I plugged all of the array ports into the SAN switches, and turned it on. Using the Remote Setup Wizard, I initialized the array, joined it to the group, then assigned and activated the rest of the NICs. To migrate all of the data from one array to another, highlight the member with the data on it, then click on “Delete Member.” Perhaps EqualLogic will revisit this term; “delete” implies way too many things that don’t relate to this task.

The process of migrating data chugs along nicely. VMs and end users are none the wiser. Once it is complete, the old array will remove itself from the group, and reset itself to the factory defaults. It’s really impressive. Actually, the speed and simplicity of the process gave me confidence for when we need to add additional storage.

When the old array was back to its factory defaults, I went back to initialize the array, and set it up as a new member in a new group. This would be my new group that would be used for some preliminary replication testing, and will eventually live at the offsite location.

As for how this process compares with competing products, I’m the wrong guy to ask. I’ve had zero experience with Fibre Channel SANs, or iSCSI SANs from other vendors. But what I can say is that it was easy, and fast.

After configuring the replication between the two groups, which consisted of configuring a few shared passwords between the two groups and configuring replication to occur on each volume, I was ready to try it out. …Almost.

 

Snapshots and replication

It’s worth taking a step back to review a few things about snapshots and how the EqualLogic handles them. Replicas appear to work in a similar (but not identical) manner to snapshots, so many of the same principles apply. Remember that snapshots can be made in several ways.

1.  The most basic are snapshots created in the EqualLogic Group Manager. These do exactly as they say, making a snapshot of the volume. The problem is that they are not file-system consistent for VM datastores, and would only be suitable for datastores in which all of the VMs were turned off at the time the snapshot was made.

2.  To protect VMs, “Autosnapshot Manager VMware Edition” (ASM/VE) provides the ability to create a point-in-time snapshot, leveraging vCenter through VMware’s API, then does some nice tricks to make this an independent snapshot (well, of the datastore anyway) that you see in the EqualLogic Group Manager, under each respective volume.

3.  For VMs with guest iSCSI attached drives, there is “Autosnapshot Manager Microsoft Edition” (ASM/ME). This great tool is installed with the Host Integration Toolkit (HITKit). It makes application-aware snapshots by taking advantage of the Microsoft Volume Shadow Copy Service provider. This is key for protecting SQL databases, Exchange databases, and even flat-file storage residing on guest attached drives. It ensures that all I/O is flushed when the snapshot is created. I’ve grown quite partial to this type of snapshot, as it’s nearly instant, causes no interruption to the end users or services, and provides easy recoverability. The downside is that it can only protect data on iSCSI attached drives within the VM’s guest iSCSI initiator, and it must have a VSS writer specific to an application (e.g. Exchange, SQL) in order for it to talk correctly. You cannot protect the VM itself with this type of snapshot. Also, vCenter is generally unaware of these types of guest attached drives, so VCB backups and other apps that rely on vCenter won’t include these types of volumes.

So just as I use ASM/ME for Smartcopy snapshots of my guest attached drives, and ASM/VE for my VM snapshots, I will use these tools in a similar way to create VM- and application-aware replicas of the VMs and the data.

ASM/VE tip:  Smartcopy snapshots using ASM/VE give the option to “Include PS series volumes accessed by guest iSCSI initiators.”  I do not use this option for a few very good reasons, and rely completely on ASM/ME for properly capturing guest attached volumes. 

Default replication settings in EqualLogic Group Manager

When one first configures a volume for replication, some of the EqualLogic defaults are very generous. The two settings to look out for are the “Total replica reserve” and the “Local replication reserve.” These conservative defaults can chew up a lot of free space on your SAN. Assuming you have a decent amount of free space in your storage pool, and you choose to stagger some of your replication to occur at various times of the day, you can reduce the “Local replication reserve” down to its minimum, then click the checkbox for “allow temporary use of free pool space.” This will minimize the impact of enabling replication on your array.
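A little arithmetic shows why the defaults sting. The percentages below are parameters, not a claim about any particular firmware’s defaults; plug in whatever your Group Manager actually shows:

def replication_reserves(volume_gb, local_pct, replica_pct):
    """Space earmarked by replication reserves, in GB.
    local_pct:   "Local replication reserve" as a % of volume size (source group).
    replica_pct: "Total replica reserve" as a % of volume size (target group)."""
    return volume_gb * local_pct / 100, volume_gb * replica_pct / 100

# e.g. a 500 GB volume with a 100% local reserve and a 200% replica reserve
# earmarks 500 GB at the source and 1000 GB at the target:
print(replication_reserves(500, 100, 200))  # (500.0, 1000.0)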

 

Preparing VMs for replication

There were a few things I needed to do to prepare my VMs to be replicated. I wasn’t going to tackle all optimization techniques at this time, but thought it best to get some of the easy things out of the way first.

1.  Reconfigure VMs so that the swap file is NOT in the same directory as the other VM files. (This is the swap file for the VM at the hypervisor level; not to be confused with the guest OS swap file.) First I created a volume in the EqualLogic Group Manager that would be dedicated to VM swap files, then made sure it was visible to each ESX host. Then, simply configure the swap location at the cluster level in vCenter, followed by changing the setting on each ESX host. The final step is to power off and power on each VM; a restart/reboot will not work for this step. (A sketch for verifying the result follows at the end of this section.) Once this is completed, you’ve eliminated a sizeable amount of data that doesn’t need to be replicated.

2.  Revamp datastores to reflect good practices with ASM/VE. (I’d say “best practices,” but I’m not sure if they exist, or if these qualify as such.) This is a step that takes into consideration how ASM/VE works, and how I use ASM/VE. I’ve chosen to make my datastores reflect how my VMs are arranged in vCenter. Below is a screenshot in vCenter of the folders that contain all of my VMs.

[screenshot: VM folders in vCenter]

Each folder has VMs in it that reside in just one particular datastore. So for instance, the “Prodsystems-Dev” folder has a half dozen VMs exclusively for our development team, and these all reside in one datastore called VMFS05DS. When ASM/VE takes a scheduled snapshot of a vCenter folder (e.g. “Prodsystems-Dev”), it will only hit the VMs in that vCenter folder, and the single datastore that they reside on. If it is not done this way, an ASM/VE snapshot of a folder containing VMs that reside in different datastores will generate snapshots in each datastore. This becomes terribly confusing to administer, especially when trying to recover a VM.
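This folder-to-datastore discipline is easy to check programmatically. A hedged pyVmomi sketch, with connection setup as in the rescan sketch from part 3, assuming one level of VM folders:

from collections import defaultdict

# Map each top-level vCenter folder to the set of datastores its VMs touch.
folder_to_ds = defaultdict(set)
for folder in datacenter.vmFolder.childEntity:       # top-level VM folders
    for vm in getattr(folder, "childEntity", []):    # VMs inside each folder
        for ds in vm.datastore:
            folder_to_ds[folder.name].add(ds.name)

for name, stores in sorted(folder_to_ds.items()):
    note = "" if len(stores) == 1 else "  <-- spans multiple datastores"
    print(name, sorted(stores), note)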

Since I recreated many of my volumes and datastores, I also jumped on the opportunity to make these new datastores with a 4MB block size instead of the default 1MB block size. Not really necessary in my situation, but based on the link here, it seems like a good idea.
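For context, the VMFS-3 block size chosen at datastore creation caps the largest single file (and therefore the largest VMDK) the datastore can hold; the caps below are the commonly documented approximate values, worth double-checking against VMware’s docs for your version:

# VMFS-3 block size (MB) -> approximate maximum file size (GB).
VMFS3_MAX_FILE_GB = {1: 256, 2: 512, 4: 1024, 8: 2048}

def min_block_size_mb(vmdk_gb):
    """Smallest VMFS-3 block size able to hold a VMDK of vmdk_gb gigabytes."""
    return min(bs for bs, cap in VMFS3_MAX_FILE_GB.items() if cap >= vmdk_gb)

print(min_block_size_mb(600))  # -> 4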

Once the volumes and the datastores were created and sized the way I desired, I used the Storage vMotion function in vCenter to move each VM into the appropriate datastore to mimic my arrangement of folders in vCenter. Because I’m sizing my datastores for a functional purpose, I have a mix of large and small datastores. I probably would have made these the same size if it weren’t for how ASM/VE works.
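And here is the verification sketch promised in step 1 above: after the power-off/power-on cycle, each VM’s .vswp file should live on the dedicated swap datastore. Again pyVmomi, continuing the earlier assumptions, with the datastore name invented for illustration:

SWAP_DS = "VMSWAPDS"  # hypothetical name of the dedicated swap volume

# Flag any VM whose hypervisor swap file is not on the dedicated datastore.
for folder in datacenter.vmFolder.childEntity:       # same folder layout as above
    for vm in getattr(folder, "childEntity", []):
        for f in vm.layoutEx.file:
            if f.name.endswith(".vswp") and not f.name.startswith(f"[{SWAP_DS}]"):
                print(f"{vm.name}: swap file still at {f.name}")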

The datastores are in place, and now mimic the arrangement of folders of VM’s in vCenter.  Now I’m ready to do a little test replication.  I’ll save that for the next post.

Suggested reading

Michael Ellerbeck has some great posts on his experiences with EqualLogic, replication, Dell switches, and optimization.    A lot of good links within the posts.
http://michaelellerbeck.com/

The Dell/EqualLogic Document Center has some good overview documents on how these components work together.  Lots of pretty pictures. 
http://www.equallogic.com/resourcecenter/documentcenter.aspx