Virtualization. Making it happen

It’s difficult to put into words how exciting, and how overwhelming, the idea of moving to a virtualized infrastructure was for me.  In 12 months, I went from investigating solutions, to presenting our options to senior management, through the procurement process, and on to the design and implementation of the systems.  And finally, transitioning our physical machines to a virtualized environment.

It has been an incredible amount of work, but equally satisfying.  The pressure to produce results loomed even larger than the investment itself.  I took away a few lessons from this project, some of which had nothing to do with virtualization.  So rather than providing endless technical details in this post, I thought I’d share what I learned that has nothing to do with vSwitches or CPU utilization.

1.  The sell.  I never would have been able to achieve what I did without the support of our Management Team.  I’m an IT guy, and do not have a gift for crafty PowerPoint slides or fluid presentation skills.  But there was one slide that hit it out of the park for me.  It showed how much this crazy idea was going to cost, but more importantly, how that compared against what we were going to spend anyway under a traditional environment.  We had delayed server refreshes for a few years, and it was catching up with us.  Without even factoring in the projected growth of the company, the two cost lines intersected in less than one year.  I’m sure the dozen other slides helped support my proposal, but this one offered the clarity needed to get approval.

2.  Let go.  I tend to be self-reliant, and had made a habit of leaning on my own skills to get things done.  At a smaller company, you get used to that.  Time simply didn’t allow for that approach on this project.  I needed help, and fast.  I was fortunate to establish a great working relationship with Mosaic Technologies.  They provided resources that gave me the knowledge I needed to make good purchasing decisions, then assisted with the high-level design.  I had access to some of the most knowledgeable folks in the industry to help me move the project forward, minimizing the floundering on my part.  They also helped me sort out what could be done from what should be done, with real-world recommendations on deployment practices.  None of this excused me from the learning that needed to occur, but it sped up the process and helped me apply a virtualization solution to our environment correctly.  There is no way I would have been able to do it in the required time frame without them.

3.  Ditch the notebook.  Consider the way you assemble what you’re learning.  I’ve never needed to gather as much information on a project as this one.  I hated not knowing what I didn’t know.  (take that, Yogi Berra)  I was poring over books, white papers, and blogs to give myself a crash course on a number of different subjects – all at the same time, because they needed to work together.  Because of the scale of the project, I decided from the outset to try something different.  This was the first project where I abandoned scratchpads, binders, highlighters (mostly), and printouts.  I documented ALL of my information in Microsoft OneNote.  This was a huge success, which I will describe more in another post.

4.  Tune into RSS feeds.  Virtualization is a great example of a topic that many smart people dedicate their entire focus to, and are kind enough to post what they learn on their blogs.  Having feeds come right to your browser is the most efficient way to keep up with the content.  Every day I’d scan the feeds from the few dozen VMware-related blogs I was tracking.  It was uncanny how timely and applicable some of the information was.  Not every bit of it could be unconditionally trusted, but hey, it’s the Internet.

5.  Understand the architecture.  Looking back, I spent an inordinate amount of time in the design phase.  Much of it went to fully understanding what was being recommended to me by my resources at Mosaic, as well as other material, and how that compared to other environments.  At times, grass grew faster than the project was moving (exacerbated by other projects getting in the way), but I don’t regret my stubbornness in understanding what I was trying to absorb before moving forward.  We now have a scalable, robust system that avoids some of the common mistakes I see come up on user forums.

6.  Don’t be a renegade.  Learn from those who really know what they are doing, and choose proven technologies, while recognizing trends in the fast-moving virtualization industry.  For me there was a higher up-front cost to this approach, but time didn’t allow for any experimentation.  It helped me settle on VMware ESX powered by Dell blades, running on a Dell/EqualLogic iSCSI SAN.  That is not to suggest that a different or lesser configuration will not work, but for me, it helped expedite my deployment.

7.  Just because you are a small shop doesn’t mean you don’t have to think big.  Many of my design considerations centered on planning for the future:  how the system could scale and change, and how to minimize the headaches that come with those changes.  I wanted my VLANs arranged logically, and address boundaries configured in a way that would make sense for growth.  For a company of about 50 employees and 120 systems, I had never had to deal with this very much.  Thanks to another good friend of mine, with whom I’d been corresponding on a project a few months prior, I was able to get things started on the right foot.  I’ll tell you more about this in a later post.

The results of the project have exceeded my expectations.  It’s working even better than I anticipated, and has already proven its value when a hardware failure occurred.  We’ve migrated over 20 of our production systems to the new environment, and will have about 20 more online within about six months.  There is a tremendous amount of work yet to be completed, but the project is already paying for itself.

It’s all about the name

Every once in a while you run into a way of doing things that makes you wonder why you ever did it any other way.  For me, that was using DNS aliases to reference servers and the services they provide.  I use them whenever possible.

Many years ago I had a catastrophic server failure.  Looking back, it was a fascinating series of events that you would think could never happen, but it did.  This server happened to be the primary storage server for our development team, and was a staple of our development system.  Its full server name was hardcoded in mount points and symbolic links on other *nix systems, as well as in drive mappings from Windows machines connecting to it via Samba.  Its name was buried in countless scripts owned by the Development and QA teams.  Once the new hardware came in, provisioning a new server was relatively easy.  Getting everything functioning again, because of all of these broken links, was not.  Other factors prevented me from taking the old approach of giving the new server the same name as the old one.  So I knew there had to be a better way.  There was:  using DNS aliases (CNAME records) on your internal DNS servers to decouple the server name itself from the service it provides.  This practice helps you design your server infrastructure for change.
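
As a sketch of what that looks like:  suppose the replacement storage server’s real name is fs01.mycompany.lan, and you want everything to reference it as devstore instead.  On a Windows DNS server you could create the alias with the dnscmd utility (part of the Support Tools on Server 2003, included with later versions).  The names dns01, fs01, and devstore here are hypothetical:

    rem Create a CNAME so clients reference the alias, not the real host.
    rem dns01 = DNS server, mycompany.lan = internal zone (names are examples).
    dnscmd dns01 /RecordAdd mycompany.lan devstore CNAME fs01.mycompany.lan.

From then on, mounts, mappings, and scripts point at devstore.mycompany.lan, and the real server behind it can change without touching any of them.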

Good candidates for aliasing are:

  • NTP/time servers (automated for domain-joined machines, but not for non-joined machines, *nix systems, and network devices – see the sketch after this list)
  • Email servers (primary email servers, as well as mail relay servers)
  • Source code control servers
  • Document management, wikis, or collaboration servers
  • Critical workstations/servers that perform source code compiling and/or validation testing.
  • Network devices and OOB management cards.  I can’t remember what the FQDNs of my switches are.  Can you?
  • Log servers.
  • File servers and their respective share names or NFS exports (e.g. \\infostore\sales and infostore:/exports/sales, respectively)
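
That NTP case is a good illustration.  A non-joined *nix box has its time source hardcoded in a config file; if that file references an alias, the client never has to change when the time server moves.  A minimal sketch, assuming a standard ntpd setup and a hypothetical timeserver alias:

    # /etc/ntp.conf – reference the alias, not the real host
    server timeserver.mycompany.lan

When the machines behind that alias are replaced, only the CNAME changes; every device still asks timeserver for the time.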

The practice is particularly interesting on file servers.  Say you start out with one file server that contains shares for your applications, your files, and your user home directories.  You could have share names that reference aliases, all for the very same server:

  • \\appserv\applications
  • \\fileserv\operations
  • \\userserv\joesmith
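
Behind the scenes, all three aliases can point at the same physical box, using the same dnscmd approach as above (the real server name fs01 is again hypothetical):

    rem Three aliases, one real server.
    dnscmd dns01 /RecordAdd mycompany.lan appserv CNAME fs01.mycompany.lan.
    dnscmd dns01 /RecordAdd mycompany.lan fileserv CNAME fs01.mycompany.lan.
    dnscmd dns01 /RecordAdd mycompany.lan userserv CNAME fs01.mycompany.lan.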

Now, when you need to move user home directories over to a new server, or bring up a new server to take over that role, you just move the data, bring up the share name, and change the alias.
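
The cut-over itself is two commands – a sketch, assuming the home directories have already been copied to a hypothetical new server fs02 and the shares created there:

    rem Repoint the userserv alias from the old file server to the new one.
    dnscmd dns01 /RecordDelete mycompany.lan userserv CNAME fs01.mycompany.lan. /f
    dnscmd dns01 /RecordAdd mycompany.lan userserv CNAME fs02.mycompany.lan.

Users keep mapping \\userserv\joesmith and never notice the move, aside from waiting out the record’s TTL or flushing the client-side DNS cache.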

Of course, there are some things that aliasing can’t be used for, or doesn’t work well with:

  • DNS clients that need to refer to their DNS servers require IP addresses, and can’t use aliases
  • Some Windows services that use complex authentication methods.
  • Services relying on SSL certificates that expect to see the real name, not the alias.  (e.g. Exchange URL references)
  • Windows Server 2003 and earlier do not support share aliases out of the box; they will answer only to \\realservername\sharename by default.  You will need to add a registry value to disable strict name checking (see the sketch after this list).  More info found here:  http://support.microsoft.com/kb/281308
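
For reference, the registry change described in that KB article looks like this – run it on the file server itself, then restart the Server service (and, as with anything registry-related, test before relying on it):

    rem Allow the server to answer SMB requests sent to its alias names.
    reg add HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters /v DisableStrictNameChecking /t REG_DWORD /d 1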

Most recently, I made the transition from Exchange 2003 to Exchange 2007.  Usually a project like that has pages of carefully planned-out steps for the cut-over:  what needs to be changed, and when.  What I didn’t have to worry about this time was all of the internal hosts that reference the mail server by its DNS alias, mailserver.mycompany.lan.  Just one easy step to change the CNAME reference from the old server name to the new one, and that was it.  The same thing occurred when I transitioned to new Domain Controllers a few months ago, which serve as the time servers for all internal systems and devices.

What’s most surprising is that this practice is not used in IT environments as often as you’d think.  There might be an occasional alias here and there, but not a calculated effort to ease transitions to new servers and reduce downtime.  Whether you are doing planned server transitions or recovering from a server failure, this is a practice that will help in almost any situation.

An introduction of sorts…

There are thousands of great blogs out there, with extremely smart people contributing all sorts of great information. This may not be one of them. Let me explain.

Every IT Administrator on a staff of one or two knows that your strength isn’t in knowing every nuance of one particular thing, but rather in the breadth of knowledge needed to make everything work together. The dramatic shifting of gears that has to occur in my job on a daily basis is not unique, but it is no less surprising. From deploying a new virtualized infrastructure one day, to figuring out why some SQL buried in our CRM doesn’t work the next, to getting all of our *nix systems to play nicely with our Windows systems. It never ends. I used to think that the lack of absolute expertise in one specific thing was a hindrance. Now I see it as a strength.

I’ve gotten to stand on the shoulders of many, and would be foolish to think I’ve accomplished everything on my own. This includes mentors, colleagues, solution providers, Management teams who trusted my opinion, and those unsung heroes who figured out some registry entry that needed to be changed, and chose to write about it, so that I could get some sleep.

So this is to all of those men and women who are capable of setting up multiple VLANs, but find themselves fixing the photocopier because… well, nobody else can.