It is never any fun getting left behind in IT. Major upgrades every year or two might not be a big deal if you only had to deal with one piece of software, but take a look at most software inventories, and you’ll see possibly dozens of enterprise-level applications and supporting services that all contribute to the chaos. It can be overwhelming for just one person to handle. While you may be perfectly justified in holding off on specific upgrades, there still seems to be a bit of guilt around doing so. You might have ample business and technical factors to support such decisions, and a well-crafted message providing clear reasons to stakeholders. Yet the business and political pressures ultimately win out, and you find yourself addressing the more customer- and user-facing application upgrades before the behind-the-scenes tools that power it all.
That is pretty much where I stood with my virtualized infrastructure. My last major upgrade was to vSphere 4.0. Sure, I had visions of keeping up with every update and patch, but a little time passed, and several hundred distractions later, I found myself left behind. When vSphere 4.1 came out, I also had every intention of upgrading. However, I was one of the legions of users who had a vCenter server running on a 32-bit OS, and that complicated matters a little bit. I looked at the various publications and posts on the upgrade paths and experiences. Nothing seemed quite as easy as I was hoping for, so I did what came easiest to my already packed schedule: nothing. I wondered just how many administrators found themselves in the same predicament: not touching an aging, albeit perfectly fine-running system.
My ESX 4.0 cluster served my organization well, but times change, and so do needs. A few things came up to kick-start the desire to upgrade.
- I needed to deploy a pilot VDI project, fast. (more about this in later posts)
- We were a victim of our own success with virtualization, and I needed to squeeze even more power and efficiency out of our investment in our infrastructure.
Both are pretty good reasons to upgrade, and while I would have loved to do my typical due diligence on every possible option, I needed a fast track. My move to vSphere 5.0 was really just a prerequisite of sorts to my work with VDI.
But how should I go about an upgrade?
Do I update my 4.0 hosts to the latest update eligible for an upgrade path to 5.0, and if so, how much work would that involve? Should I transition to a new vCenter server, migrate the database, and then run a mixed environment of ESX hosts at different versions? What sort of problems would that introduce? After conferring with a trusted colleague of mine who always seems to have pragmatic sensibilities when it comes to virtualization, I decided which option was going to be best for me. I opted not to upgrade at all, but simply to transition to a pristine new cluster. It looked something like this:
- Take a host (either new, or by removing an existing one from the cluster), and build it up with ESXi 5.0.
- Build up a new 64-bit VM for running a brand new vCenter, and configure it as needed.
- Remove VMs one at a time from the old cluster: power each one down, remove it from inventory, and add it to the new cluster.
- Once enough VMs have been removed, take another host, remove it from the old cluster, rebuild it with ESXi 5.0, and add it to the new cluster.
- Repeat until finished.
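For clarity, the loop above can be modeled as a short Python sketch. Everything here is illustrative: the cluster lists, host names, and the `rebuild_as_esxi5` helper are stand-ins for manual work (or vCenter scripting), not any VMware API.

```python
# Illustrative sketch of the rolling "drain and rebuild" transition.
# Clusters are modeled as simple lists; in practice each step maps to
# manual work (or scripted calls) against the old and new vCenter.

def rebuild_as_esxi5(host):
    # Stand-in for wiping the host and doing a fresh ESXi 5.0 install.
    return f"{host} (ESXi 5.0)"

def transition(old_hosts, vms_per_host, spare_host):
    """Move every VM and host from the old cluster to the new one.

    old_hosts:    host names still running ESX 4.0
    vms_per_host: dict mapping host name -> list of VM names on it
    spare_host:   the first host to stand up as ESXi 5.0
    """
    new_cluster_hosts = [rebuild_as_esxi5(spare_host)]
    new_cluster_vms = []

    for host in list(old_hosts):
        # Drain: power down each VM, remove it from the old inventory,
        # add it to the new cluster (updating VMware Tools on power-up).
        for vm in vms_per_host.pop(host):
            new_cluster_vms.append(vm)
        # Once a host is empty, pull it from the old cluster,
        # rebuild it with ESXi 5.0, and add it to the new cluster.
        old_hosts.remove(host)
        new_cluster_hosts.append(rebuild_as_esxi5(host))

    # Repeat until finished: the old cluster is empty when the loop ends.
    return new_cluster_hosts, new_cluster_vms
```

The invariant is what matters: a host is only rebuilt once it holds no VMs, so the old cluster keeps serving its remaining workloads throughout the transition.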
For me, the decision to start from scratch won out. Why?
- I could build up a pristine vCenter server, with a database that wasn’t going to carry over any unwanted artifacts of my previous installation.
- I could easily set up the new vCenter to emulate my old settings. Folders, EVC settings, resource pools, etc.
- I could transition or rebuild my supporting VMs and appliances on the new infrastructure to make sure they worked before committing to the transition.
- I could afford a simple restart of each VM as I transitioned it to the new cluster. I used this as an opportunity to update VMware Tools as each VM was added to the new inventory.
- I was willing to give up historical data in my old vSphere 4.0 cluster for the sake of simplicity of the plan and cleanliness of the configuration.
- Predictability. I didn’t have to read a single white paper or discussion thread on database migrations or troubles with DSNs.
- I have a well-documented ESX host configuration that is not terribly complex, and is easy to recreate across 6 hosts.
- I just happened to have purchased an additional blade and license of ESX, so it was an ideal time to introduce it to my environment.
- I could get my entire setup working, then sort out my licensing after it was all complete.
You’ll notice that one option similar to this approach would have been to simply move a host full of running VMs out of the existing cluster and into the new cluster. This may have been just as good a plan, as it would have avoided the need to manually shut down and remove each VM one at a time during the transition. However, it would have meant running a mix of ESX 4.0 and 5.0 hosts in the new cluster, and I didn’t want to carry anything over from the old setup. I would have needed to upgrade or rebuild the host anyway, and I had to restart each VM to make sure it was running the latest Tools. If for nothing other than clarity of mind, my approach seemed best for me.
Prior to beginning the transition, I needed to update my Dell EqualLogic firmware to 5.1.2. A collection of very nice improvements made this a worthwhile upgrade in its own right, but it was also a requirement for what I wanted to do. While the upgrade itself went smoothly, it did re-introduce an issue or two. The folks at Dell EqualLogic are aware of this, and are working to address it, hopefully in their next release. The combination of the firmware upgrade and vSphere 5 allowed me to use the latest and greatest tools from EqualLogic, primarily the Host Integration Tools VMware Edition (HIT/VE) and the storage integration in vSphere thanks to VASA. As of this writing, though, EqualLogic does not have a full production release of their Multipathing Extension Module (MEM) for vSphere 5.0. The EPA version was just released, but I’ll probably wait for the full release of MEM before I apply it to the hosts in the cluster.
While I was eager to finish the transition, I didn’t want to prematurely create any problems. I took a page from my own lessons learned during my upgrade to ESX 4.0, and exercised some restraint when it came to updating the Virtual Hardware for each VM to version 8. My last update of Virtual Hardware levels in each VM caused some unexpected results, as I shared in “Side effects of upgrading VM’s to Virtual Hardware 7 in vSphere.” Apparently, I wasn’t the only one who ran into issues, because that post has statistically been my all-time most popular post. The capabilities of VMs running Virtual Hardware 8 are pretty neat, but I’m in no rush to make any virtual hardware changes to some of my key production systems, especially those noted.
So, how did it work out? The actual process completed without a single major hang-up, and I am thrilled with the result. The irony here is that even though vSphere provides most of the intelligence behind my entire infrastructure, and does things that are mind-bogglingly cool, it was so much easier to upgrade than, say, SharePoint, AD, Exchange, or some other enterprise software. Great technologies are great because they work like you think they should. No exception here. If you are considering a move to vSphere 5.0 and are a little behind on your old infrastructure, this upgrade approach might be worth considering.
Now, onto that little VDI project…
- A great resource on setting up SQL 2008 R2 for vCenter: How to Install Microsoft SQL Server 2008 R2 for VMware vCenter 5
- Installing vCenter 5 Best Practices
- A little VMFS 5.0 info
- Information on the EqualLogic Multipathing Extension Module (MEM), and if you are an EqualLogic customer, why you should care.