Side effects of upgrading VMs to Virtual Hardware 7 in vSphere

 

New Year’s Day 2010 was a day of opportunity for me.  With the office closed, I jumped at the chance to upgrade my ESX cluster from 3.5 to vSphere 4.0 U1.  Those in IT know that a weekend flanked by a holiday is your friend, offering a bit more time to resolve problems, or to back out entirely.  The word on the street was that the upgrade to vSphere was a smooth one, but I wanted extra time, just in case.  The plan was laid out, the prep work had been done, and I was ready to pull the trigger.  The steps fell into four categories.

1.  Upgrade vCenter 
2.  Upgrade ESX hosts 
3.  Upgrade VM Tools and virtual hardware 
4.  Tidy up by taking care of licensing and VCB, and confirm that everything works as expected.

Pretty straightforward, right?  Well, in fact it was.  VMware should be commended for the fine job they did in making the transition relatively easy.  I didn’t even need the three-day weekend to do it.  Did I overlook anything?  Of course.  In particular, I didn’t pay attention to how the Virtual Hardware 7 upgrade was going to affect my systems at the application layer.

At the heart of the matter is that when the Virtual Hardware upgrade is made, VMware will effortlessly disable and hide the old NICs, add new NICs, and then reassign the addressing info that was on the old cards.  It’s pretty slick, but it does cause a few problems.

  1. Static IP addressing gets transferred over correctly, but the other NIC settings do not.  Do you have all but IPv4 disabled (e.g. Client for Microsoft Networks, QoS, etc.) for your iSCSI connections to your SAN?  Do you have NetBIOS over TCP/IP shut off as well?  After the Hardware 7 upgrade, all of those services will be turned back on.  Do you have IPv6 disabled on your LAN NIC?  (No, that’s not a commentary on my dislike of IPv6; there are many legitimate reasons to do this.)  That will be turned back on too.
  2. NIC binding order will reset itself to an order you probably do not want.  This affects services in a big way, especially when you factor in side effect #1.  (Please note that none of my systems were multi-homed on the LAN side.  The additional NICs for each VM were simply for iSCSI-based storage access using a guest iSCSI initiator.)
  3. Guest iSCSI initiator settings *may* be different.  The most common symptoms I saw were duplicate entries on the “Discovery” tab, and the “Volumes and Devices” tab no longer listing the drive letter of the guest-initiated drive.  That entry needs to be there so that dependent services don’t jump the gun and start before the volume is available.
  4. Duplicate reverse DNS records.  I stumbled upon this after the upgrade based on some errors I was seeing.  Many mysteries can occur with orphaned, duplicate reverse DNS records.  Get rid of ’em as soon as you see them.  It won’t hurt to check your WINS service and clear that out as well, and keep an eye on those machines with DHCP reservations.
  5. In Microsoft’s operating systems, the network configuration subsystem generates a Globally Unique Identifier (GUID) for each NIC that is partially based on the MAC address of the NIC.  This GUID may or may not be used by applications like Exchange, CRM, SharePoint, etc.  When the NIC changes, the GUID changes…  and services may break.

Items 1 through 4 are pretty easy to handle – even easier when you know what’s coming.  Item #5 is a total wildcard.

What’s interesting is that this created the kind of problem that is in many ways the most troublesome for administrators: the system looks like it’s running fine, but it isn’t.  Most things work just long enough for your VM snapshots to become stale, which rules out the quick fix.

Now, in hindsight, I see that some of this was documented, as much of this type of thing comes up in P2V and V2V conversions.  However, much of it was not.  My hope is to save someone else a little heartache.  Here is what I did after each VM was upgraded to Virtual Hardware 7.

All VMs

Removing old NICs

  1. Open a command prompt using the “Run as administrator” option.  Type set devmgr_show_nonpresent_devices=1, then press Enter.
  2. Type start devmgmt.msc, then press Enter.  (Steps 1 and 2 can also be scripted; see the sketch after this list.)
  3. Click View > Show hidden devices.
  4. Expand the Network adapters tree.
  5. Right-click the grayed-out NICs, and click Uninstall.
  6. Click View > Show hidden devices again to toggle it off.
  7. Exit the application.
  8. Type set devmgr_show_nonpresent_devices=0, then press Enter.
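
For what it’s worth, the first two steps can also be kicked off from an elevated PowerShell session; a minimal sketch (the variable is only inherited by programs launched from that same session):

# Sketch: launch Device Manager with non-present (ghost) devices visible.
$env:devmgr_show_nonpresent_devices = 1
Start-Process devmgmt.msc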

Change binding order of new NICs

  1. Right-click on Network, then click Properties > Manage Network Connections.
  2. Rename the NICs to “LAN”, “iSCSI-1”, and “iSCSI-2”, or whatever you wish.
  3. Change the binding order so the LAN NIC is at the top of the list.
  4. Disable IPv6 on the LAN NIC.
  5. For the iSCSI NICs, disable all but TCP/IPv4.  Verify the static IP (with no gateway or DNS servers), verify that “Register this connection’s addresses in DNS” is unchecked, and disable NetBIOS over TCP/IP.  (A scripted version of these settings follows this list.)
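
Those last per-NIC tweaks can be scripted when you have a stack of VMs to touch.  A minimal PowerShell/WMI sketch, assuming (as in my case) the iSCSI NICs can be identified by their 10.10.0.x static addresses; adjust the filter for your own addressing:

# Sketch: disable NetBIOS over TCP/IP and DNS registration on the iSCSI NICs.
# The 10.10.0.x filter is an assumption from my environment; adjust it.
$nics = Get-WmiObject Win32_NetworkAdapterConfiguration |
    Where-Object { $_.IPEnabled -and ($_.IPAddress -match '^10\.10\.0\.') }
foreach ($nic in $nics) {
    $nic.SetTcpipNetbios(2) | Out-Null                         # 2 = disable NetBIOS over TCP/IP
    $nic.SetDynamicDNSRegistration($false, $false) | Out-Null  # uncheck DNS registration
}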

Verify/Reset iSCSI initiator settings

  1. Open the iSCSI initiator.
  2. Verify that the Discovery tab contains just one entry: x.x.x.x:3260, Default Adapter, Default IP address.  (These checks can also be made from the command line; see the sketch after this list.)
  3. Verify the Targets and Favorite Targets tabs.
  4. On the Volumes and Devices tab, click “Autoconfigure” to repopulate the entries (this clears up mangled entries on some systems).
  5. Click OK, and restart the machine.
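
If you have a lot of VMs to check, the built-in iscsicli utility can spot-check much of this from a shell instead of the GUI; a quick sketch using the standard Microsoft initiator commands:

# Sketch: spot-check the initiator from an elevated PowerShell session.
iscsicli ListTargetPortals      # should show just the one discovery portal
iscsicli ListTargets            # targets the initiator knows about
iscsicli ReportTargetMappings   # live sessions and their device mappings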

DNS and DHCP

  1. Remove duplicate reverse lookup (PTR) records for systems upgraded to Virtual Hardware 7.  (See the dnscmd sketch after this list.)
  2. For systems that have DHCP-reserved addresses, jump into your DHCP manager and modify as needed.
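
If you would rather script the cleanup than click through the DNS console, stale PTR records can be listed and removed with dnscmd.  A sketch, where the server name (DC1) is a placeholder, the reverse zone matches the 192.168.0.x LAN shown later in this post, and the node “20” is the last octet of the host:

# Sketch: find and delete a stale PTR record (server and host names are
# placeholders; substitute your own.  /f skips the confirmation prompt).
dnscmd DC1 /EnumRecords 0.168.192.in-addr.arpa 20 /Type PTR
dnscmd DC1 /RecordDelete 0.168.192.in-addr.arpa 20 PTR oldserver.domainname.lan /f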

Exchange 2007 Server

Exchange seemed operational at first, but the more I looked at the event logs, the more I realized I needed to do some cleanup.

After performing the tasks under the “All VMs” fix, most issues went away.  However, one stuck around.  Because of the GUID issue, if your Exchange Server is running the transport server role, it will flag you with Event ID 205 errors; it is still looking for the old GUID.  Here is what to do.

First, determine status of the NICs

[PS] C:\Windows\System32>get-networkconnectioninfo

Name : Intel(R) PRO/1000 MT Network Connection #4
DnsServers : {192.168.0.9, 192.168.0.10}
IPAddresses : {192.168.0.20}
AdapterGuid : 5ca83cae-0519-43f8-adfe-eefca0f08a04
MacAddress : 00:50:56:8B:5F:97

Name : Intel(R) PRO/1000 MT Network Connection #5
DnsServers : {}
IPAddresses : {10.10.0.193}
AdapterGuid : 6d72814a-0805-4fca-9dee-4bef87aafb70
MacAddress : 00:50:56:8B:13:3F

Name : Intel(R) PRO/1000 MT Network Connection #6
DnsServers : {}
IPAddresses : {10.10.0.194}
AdapterGuid : 564b8466-dbe2-4b15-bd15-aafcde21b23d
MacAddress : 00:50:56:8B:2C:22

Then get the transport server info

[PS] C:\Windows\System32>get-transportserver | ft -wrap Name,*DNS*

[image: get-transportserver output showing the ExternalDNSAdapterGuid and InternalDNSAdapterGuid fields still pointing at the old adapter’s GUID]

Then, set the transport server info correctly

set-transportserver SERVERNAME -ExternalDNSAdapterGuid 5ca83cae-0519-43f8-adfe-eefca0f08a04

set-transportserver SERVERNAME -InternalDNSAdapterGuid 5ca83cae-0519-43f8-adfe-eefca0f08a04
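
Afterward, it’s worth re-running the query to confirm that both values now point at the LAN NIC’s new GUID:

get-transportserver SERVERNAME | ft -wrap Name,*DNSAdapterGuid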

 

SharePoint Server

Thank heavens we are just in the pre-deployment stages for SharePoint.  Our SharePoint consultant asked what I did to mess up the Central Administration site, as it could no longer be accessed (the other sites were fine, however).  After a bizarre series of errors, I thought it best to restore it from snapshot and test what was going on.  The virtual hardware upgrade definitely broke the connection, but so did removing the existing NIC and adding another one.  As of now, I can’t prove that it is in fact a problem with the NIC GUID, but it sure seems to be.  My only working solution in the time allowed was to keep the server at Hardware level 4 and build up a new SharePoint front-end server.

One might question why, with the help of vCenter, a MAC address can’t simply be forced on the server.  It can, but even though that preserves the last 12 characters of the GUID (representing the MAC address), the first part will be different.  That makes sense, because the new device is different.  The applications care about the GUID as a whole, not just the MAC address.
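
For reference, pinning a static MAC takes two lines in the VM’s .vmx file.  A sketch, using a placeholder address (VMware expects manually assigned MACs to sit in the lower portion of its 00:50:56 range):

ethernet0.addressType = "static"
ethernet0.address = "00:50:56:00:1A:2B"

Again, this preserves the MAC portion of the GUID, but the rest of the GUID is generated fresh for the new device all the same.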

Here is how you can find the GUID of the NIC in question.  Run this BEFORE you perform a virtual hardware upgrade, and save the output for future reference in case you run into problems.  Also make note of where it exists in the registry (the connection GUIDs live under HKLM\SYSTEM\CurrentControlSet\Control\Network).  It’s not a solution to the issue I had with SharePoint, but it’s worth knowing about.

C:\Users\administrator.DOMAINNAME>net config rdr
Computer name                        \\SERVERNAME
Full Computer name                   SERVERNAME.domainname.lan
User name                            Administrator

Workstation active on
        NetbiosSmb (000000000000)
        NetBT_Tcpip_{56BB9E44-EA93-43C3-B7B3-88DD478E9F73} (0050568B60BE)

Software version                     Windows Server (R) 2008 Standard

Workstation domain                   DOMAINNAME
Workstation Domain DNS Name          domainname.lan
Logon domain                         DOMAINNAME

COM Open Timeout (sec)               0
COM Send Count (byte)                16
COM Send Timeout (msec)              250
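
If you want something easier to save off than a screen dump, the same information can be pulled with a quick WMI query from PowerShell; a minimal sketch:

# Sketch: record each NIC's GUID and MAC before a virtual hardware upgrade.
Get-WmiObject Win32_NetworkAdapter |
    Where-Object { $_.GUID } |
    Select-Object NetConnectionID, Name, GUID, MACAddress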

 

In hindsight…

Let me be clear that there is really not much VMware can do about this.  The same troubles would occur on a physical machine if you needed to swap out network cards.  The difference is that on a physical machine, it isn’t done as easily, as often, or as transparently as on a VM.

If I were to do it over again (and surely I will, the day VMware releases another major upgrade), I would do a few things differently.

  1. Note all existing application errors and warnings on each server prior to the upgrade, so I don’t have to wonder whether the warning I’m staring at existed before the upgrade.
  2. Note those GUIDs before you upgrade.  You could always capture them after restoring from a snapshot if you do run into problems, but save yourself a little time and get this down on paper ahead of time.
  3. Take the virtual hardware upgrade slowly.  After everything else went pretty smoothly, I was in a “get it done” mentality.  Although the results were not catastrophic, I could have done a better job of minimizing the issues.
  4. Keep the snapshots around for at least the length of your scheduled maintenance window.  It’s not a get-out-of-jail-free card, but if you have the weekend or off-hours to experiment, it offers you a good tool to do so.

This has also helped me decide to take a very conservative approach to rolling out the new VMXNET3 NIC driver to existing VMs.  I might simply update my templates and deploy it only on new systems, or on systems that don’t run services that rely on the NIC’s GUID.

One final note.  “GUID” can mean many things depending on the context, and may be referenced in different ways (UUID, SID, etc.).  Not all GUIDs are NIC GUIDs.  The term is used quite loosely across subject areas.  What does this mean to you?  It means searching the net can be pretty painful at times.

Interesting links:

A simple PowerShell script to get the NIC GUID
http://pshscripts.blogspot.com/2008/07/get-guidps1.html

Resource allocation for Virtual Machines

Ever since I started transitioning our production systems to my ESX cluster, I’ve been fascinated by how visible resource utilization has become.  Or to put it another way, how blind I was before.  I’ve also been interested to hear about the natural tendency of many administrators to overallocate resources to their VMs.  Why does it happen?  Who’s at fault?  From my humble perspective, it’s a party everyone has shown up to.

  • Developers & Technical writers
  • IT Administrators
  • Software Manufacturers
  • Politics

Developers & Technical Writers
Best practices and installation guides are usually written by technical writers for the software manufacturer.  They are provided information by whom else?  The developers.  Someone on the development team will take a look-see at Task Manager (or htop in Linux), maybe PerfMon if they have extra time.  They determine, with a less-than-thorough vetting process, what the requirement should be, and then pass it off to the technical writer.  Does the app really need two CPUs, or does that just indicate it’s capable of multithreading?  Or both?  …Or none of the above?  Developers are the group that seems most challenged at understanding the new paradigm of virtualization, yet they are the ones who decide what the requirements are.  Some barely know what it is, or dismiss it as nothing more than a cute toy that won’t work for their needs.  It’s pretty fun to show them otherwise, but frustrating to see their continued suspicion of the technology.

IT Administrators (yep, me included)
Take a look at any installation guide for your favorite (or least favorite) application or OS.  Resource minimums are still written for hardware-based provisioning.  Most best practice guides outline memory and CPU requirements within the first few pages.  Going against the recommendations on page 2 generally isn’t good IT karma.  It feels as counterintuitive as trying to breathe with your head underwater.  Only through experience have I grown more comfortable with the practice.  It’s still tough, though.

Software Manufacturers
Virtualization can be a sensitive matter for software manufacturers.  Some would prefer that it didn’t exist, and choose to document and license their products accordingly.  Others will insist that resources are resources; why would they ever state that their server application can run with just 768MB of RAM and a single CPU core if there were even a remote possibility of it hurting performance?

Politics
Let’s face it.  How much is Microsoft going to dive into memory recommendations for an Exchange Server when their own virtualization solution does not support some of the advanced memory handling features that VMware supports?  The answer is, they aren’t.  It’s too bad, because their products run so well in a virtual environment.  Politics can also come from within.  IT departments get coerced by management, project teams, or departments, or are just worried about SLAs for critical services.  They acquiesce to try to keep everyone happy.

What can be done about it?
Rehab for everyone involved.  Too ambitious?  Okay, let’s just try to improve the installation and best practices guides from the software manufacturers.

  • Start with two or three sets of minimum requirements: provisioning the application or OS on a physical machine, followed by provisioning on a VM, accommodating a few different hypervisors.
  • Clearly state whether the application is even capable of multithreading.  That would eliminate some confusion over whether you should even consider two or more vCPUs on a VM.  I suspect many red faces would show up when software manufacturers admit to their customers that they haven’t designed their software to use more than one core anyway.  But this simple step would help administrators greatly.
  • For VM-based installations, note the low threshold for RAM below which unnecessary amounts of disk paging will begin to occur.  While the desire is to allocate as few resources as needed, nobody wants disk thrashing.
  • For physical servers, one may have a single server playing a dozen different roles.  Minimums sometimes assume this, and pad the numbers with a buffer, just in case.  A VM might be providing just a single role.  Acknowledge that this new approach exists, and adjust your requirements accordingly.

Wishful thinking perhaps, but it would be a start.  Imagine the uproar (and competition) that would occur if a software manufacturer actually spec’d a lower memory or CPU requirement when running under one hypervisor versus another?  …Now I’m really dreaming.

IT administrators have some say in this too.  Simply put, the IT department is a service provider; your staff and the business are your customers.  As a virtualization administrator, you have the ability to assert your expertise on provisioning systems to provide a service, and to make them work as efficiently as possible.  Let your customers define the acceptance criteria for their needs, and then you deal with how to make it happen.

Real World Numbers
There are many legitimate variables that make it difficult to give one-size-fits-all recommendations on resource requirements.  This makes it difficult for those first starting out.  Rather than making suggestions, I decided I would just summarize some of the systems I have virtualized, and what their utilization rates look like for a staff of about 50 people, 20 of them software developers.  These are numbers pulled during business hours.  I do not want to imply that these are the best or most efficient settings.  In fact, many of them were “first guess” settings that I plan on adjusting later.  They might offer you a point of reference for comparison, or help in your upcoming deployment.

AD Domain Controller (all roles), DNS, DHCP
Windows Server 2008 x64, 1 vCPU, 2GB RAM
Avg RAM used: 9%.  Avg CPU used: 2% (spikes to 15%).
Comments: So much for my DCs working hard.  2GB is overkill for sure, and I will be adjusting all three of my DCs’ RAM downward.  I thought the chattiness of DCs was more of a burden than it really is.

Exchange 2007 Server (all roles)
Windows Server 2008 x64, 1 vCPU, 2.5GB RAM
Avg RAM used: 30%.  Avg CPU used: 50% (spikes to 80%).
Comments: Consistently our most taxed VM, but I am pleasantly surprised by how well it runs.

Print Server, AV Server
Windows Server 2008 x64, 1 vCPU, 2GB RAM
Avg RAM used: 18%.  Avg CPU used: 3% (spikes to 10%).
Comments: Sitting as a separate server only because I hate having application servers running as print servers.

Source Code Control Database Server
Windows Server 2003 x64, 1 vCPU, 1GB RAM
Avg RAM used: 14%.  Avg CPU used: 2% (spikes to 40%).
Comments: There were fears from our dev team that this was going to be inferior to our physical server, and they suggested assigning 2 vCPUs “just in case.”  I said no.  They reported a 25% performance improvement over the physical server.  Eventually they might figure out the ol’ IT guy knows what he’s doing.

File Server
Windows Server 2008 x64, 1 vCPU, 2GB RAM
Avg RAM used: 8%.  Avg CPU used: 4% (spikes to 20%).
Comments: Low impact as expected.  Definitely a candidate for reduced resources.

SharePoint Front End Server
Windows Server 2008 x64, 1 vCPU, 2.5GB RAM
Avg RAM used: 10%.  Avg CPU used: 15% (spikes to 30%).
Comments: Built up, but not yet fully deployed to everyone in the organization.

SharePoint Back End/SQL Server
Windows Server 2008 x64, 1 vCPU, 2.5GB RAM
Avg RAM used: 9%.  Avg CPU used: 15% (spikes to 50%).
Comments: I will be keeping a close eye on this when it ramps up to full production use.  SharePoint farms are known to be hogs.  I’ll find out soon enough.

SQL Server for project tracking
Windows Server 2003 x64, 1 vCPU, 1.5GB RAM
Avg RAM used: 12%.  Avg CPU used: 4% (spikes to 50%).
Comments: Lower than I would have thought.

Code compiling system
Windows XP x64, 1 vCPU, 1GB RAM
Avg RAM used: 35%.  Avg CPU used: 5% (spikes to 100%).
Comments: Will spike to 100% CPU usage during compiles (about 20 minutes).  The compilers allow you to tell them how many cores to use.

Code compiling system
Ubuntu 8.10 LTS x64, 1 vCPU, 1GB RAM
Avg RAM used: 35%.  Avg CPU used: 5% (spikes to 100%).
Comments: All Linux distros seem to naturally prepopulate more RAM than their Windows counterparts, perhaps with the benefit of doing less paging.

To complicate matters a bit, you might observe different behaviors in some OSes (XP versus Vista/2008 versus Windows 7/2008 R2, or SQL 2005 versus SQL 2008) in their willingness to prepopulate RAM.  Give SQL 2008 4GB of RAM, and it will want to use it even if it isn’t doing much.  You might notice this when looking at relatively idle VMs with different OSes, where some have a smaller memory footprint than others.  At the time of this writing, none of my systems were running Windows 2008 R2, as it wasn’t supported on ESX 3.5 when I was deploying them.

Some of these numbers are a testament to ESX’s/vSphere’s superior memory management and CPU scheduling.  Memory ballooning, swapping, and Transparent Page Sharing all contribute to pretty startling efficiency.

I have yet to virtualize my CRM, web, mail relay, and miscellaneous servers, so I do not have any good data yet for those types of systems.  Having just upgraded to vSphere in the last few days, this also clears the way for me to assign multiple vCPUs to the code-compiling machines (as many as 20 VMs).  The compilers have switches that control exactly how many cores end up being used, and our development team needs these builds compiled as fast as possible.  That will be a topic for another post.

Lessons Learned
I love having systems isolated to their intended function now.  Who wants Peachtree crashing their email server anyway?  Administrators working in the trenches know that a server serving a single role is easier to manage, way more stable, and doesn’t cross-contaminate other services.  In a virtual environment, it’s worth any additional cost in OS licensing or overhead.

When the transition to virtualizing our infrastructure began, I thought our needs and circumstances would be different than they’ve proven to be.  Others claim extraordinary consolidation ratios with virtualization.  I believed we’d see huge improvements, but those numbers couldn’t possibly apply to us, because (chests puffed out) we needed real power.  Well, I was wrong, and so far, we really are like everyone else.

