My VMworld “Call for Papers” submission, and getting more involved

It is a good sign that you are in the right business when you get tremendous satisfaction from your career – whether it be from the daily challenges at work, or through professional growth, learning, or sharing.  It’s been an exciting month for me, as I’ve taken a few steps to get more involved.

First, I decided to submit my application for the 2013 VMware vExpert program.  I’ve sat on the sidelines, churning out blog posts for 4 years now, but with the encouragement of a few of my fellow VMUG comrades and friends, decided to throw my hat in the ring with others equally as enthusiastic as I am about what many of us do for a living.  The list has not been announced yet, so we’ll see what happens.  I’m also now officially part of the Seattle VMUG steering committee, contributing where I can to provide more value to the local VMUG community.

Next, I was honored to be recognized as a 2013 Dell TechCenter Rockstar.  Started in 2012, the DTC Rockstar program recognizes Subject Matter Experts and enthusiasts who share their knowledge on the portfolio of Dell solutions in the Enterprise.  I am flattered to be in great company with those recognized for their efforts.  Congratulations to them as well.

And finally, I took a stab at submitting an abstract for consideration as a possible session at this year’s VMworld.  I can’t say I ever imagined a scenario in which I would be responding to VMware’s annual “Call for Papers,” but real-life use cases can make for really interesting stories, and I had one to tell.  My session title is:

4370 – Compiling code in virtual machines: Identifying bottlenecks and optimizing performance to scale out development environments

image

This session was inspired by part 1 and part 2 of “Vroom! Scaling up Virtual Machines in vSphere to meet performance requirements.”  What transpired from the project was a fascinating exercise in assumptions, bottleneck chasing, and a modern virtualized infrastructure’s ability to scale up computational power immediately for an organization.  I’ve received great feedback on those posts, but they just skimmed the surface of what was learned.  What better way to demonstrate a unique use case than to share the details with those who really care?  Take a look at http://www.vmworld.com/cfp.jspa.  My submission is under the “Customer Case Studies” track, number 4730.  Public voting is now open.  If you don’t have a VMworld account, just create one – it’s free.  Click on the session to read the abstract, and if you like what you see, click the “thumbs up” button to put in a vote for it.

Spend enough time in IT, and it turns out you might have an opinion or two on things.  How to make it all work, and how to keep your sanity.  I haven’t quite figured out the definitive answers to either one of those yet, but when there is an opportunity to contribute, I try my best to pay it forward to the great communities of geeks out there.  Thanks for reading.

Configuring a VM for SNMP monitoring using Cacti

There are a number of things that I don’t miss about old physical infrastructures.  Near the top of the list is a general lack of visibility into each and every system.  Horribly underutilized hardware running happily alongside overtaxed or misconfigured systems, and it all looked the same.  Fortunately, virtualization has changed much of that nonsense, and performance trending data for VMs and hosts is a given.

Partners in the VMware ecosystem are able to take advantage of that extensibility by offering useful tools to improve management and monitoring of other components throughout the stack.  The Dell Management Plug-in for VMware vCenter is a great example of that.  It does a good job of integrating side-band management and event-driven alerting inside of vCenter.  However, in many cases you still need to look at performance trending data for devices that may not inherently have that ability on their own.  Switchgear is a great example of a resource that can be left in the dark.  SNMP can be used to monitor switchgear and other types of devices, but its use is almost always absent in smaller environments.  There are simple options to help provide better visibility even for the smallest of shops, and this post will provide what you need to know to get started.

In this example, I will be setting up a general purpose SNMP management system running Cacti to monitor the performance of some Dell PowerConnect switchgear.  Cacti leverages RRDTool’s framework to deliver time-based performance monitoring and graphing.  It can monitor a number of different types of systems supporting SNMP, but switchgear provides the best example that most everyone can relate to.  At a very affordable price (free), Cacti will work just fine in helping with these visibility gaps.

Monitoring VM
The first thing to do is to build a simple Linux VM for the purpose of SNMP management.  One would think there would be a free Virtual Appliance out on the VMware Virtual Appliance Marketplace for this purpose, but if there is, I couldn’t find it.  Any distribution will work, but my instructions will cater toward the Debian family – particularly Ubuntu, or an Ubuntu derivative like Linux Mint (my personal favorite).  Set it up with 1 vCPU and 512 MB of RAM.  Assign it a static address on your network management VLAN (if you have one); otherwise, your production LAN will be fine.  While it is a single purpose built VM, you still have to live with it, so no need to punish yourself by leaving it bare bones.  Go ahead and install the typical packages (e.g. vim, ssh, ntp) for convenience or functionality.
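If it helps, here is a rough sketch of that initial setup.  An Ubuntu or Mint VM of this era keeps its network configuration in /etc/network/interfaces; the interface name, gateway, and package list below are just examples to adjust for your environment, and the 192.168.10.12 address simply matches the monitoring VM address used later in the switch configuration.

sudo apt-get update
sudo apt-get install vim openssh-server ntp

# /etc/network/interfaces – static address for the monitoring VM
auto eth0
iface eth0 inet static
    address 192.168.10.12
    netmask 255.255.255.0
    gateway 192.168.10.1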

Templates are an option that extend the functionality in Cacti.  In the case of the PowerConnect switches, the template will assist in providing information on CPU, memory, and temperature.  A template for the PowerConnect 6200 line of switches can be found here.  The instructions below will include how to install this.

Prepping SNMP on the switchgear

In the simplest of configurations (which I will show here), there really isn’t much to SNMP.  For this scenario, one will be providing read-only access of SNMP via a shared community name. The monitoring VM will poll these devices and update the database accordingly.

If your switchgear is isolated, as your SAN switchgear might be, then there are a few options to make the switches visible in the right way.  Regardless of which option you use, the key is to make sure that your iSCSI storage traffic lives on a different VLAN from the management interface of the device.  I outline a good way to do this in “Reworking my PowerConnect 6200 switches for my iSCSI SAN.”

There are a couple of options for connecting to the isolated storage switches to gather SNMP data:

Option 1:  Connect a dedicated management port on your SAN switch stack back to your LAN switch stack.

Option 2:  Expose the SAN switch management VLAN using a port group on your iSCSI vSwitch. 

I prefer option 1, but regardless of which you choose, if it is iSCSI switches you are dealing with, you will want to make sure that management traffic is on a different VLAN than your iSCSI traffic, to maintain the proper isolation of iSCSI traffic.

Once the communication is in place, just make a few changes to your PowerConnect switchgear.  Note that community names are case sensitive, so decide on a name, and stick with it.

enable

configure

snmp-server location "Headquarters"

snmp-server contact "IT"

snmp-server community mycompany ro ipaddress 192.168.10.12
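Two optional follow-ups on the switch side.  Treat the first as an assumption, since command availability varies a bit by firmware revision, but most PowerConnect 62xx releases will display the community configuration with “show snmp” from privileged exec mode.  The second simply saves the running config so the settings survive a reboot.

show snmp

copy running-config startup-config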

Monitoring VM – Pre Cacti configuration
Perform the following steps on the VM you will be using to install Cacti.

1.  Install and configure SNMPD

apt-get update

apt-get install snmpd

mv /etc/snmp/snmpd.conf /etc/snmp/snmpd.conf.old

2.  Create a new /etc/snmp/snmpd.conf with the following contents:

rocommunity mycompany

syslocation Headquarters

syscontact IT

3.  Edit /etc/default/snmpd to allow snmpd to listen on all interfaces and use the config file.  Comment out the first line below and replace it with the second line:

SNMPDOPTS='-Lsd -Lf /dev/null -u snmp -g snmp -I -smux -p /var/run/snmpd.pid 127.0.0.1'

SNMPDOPTS='-Lsd -Lf /dev/null -u snmp -g snmp -I -smux -p /var/run/snmpd.pid -c /etc/snmp/snmpd.conf'

4.  Restart the snmpd daemon.

sudo /etc/init.d/snmpd restart

5.  Install additional perl packages:

apt-get install libsnmp-perl

apt-get install libnet-snmp-perl
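Before moving on to the Cacti install, it doesn’t hurt to confirm that SNMP is actually answering, both on the VM itself and on the switch configured earlier.  This assumes the net-snmp command line tools are installed, and the switch management address below (192.168.10.250) is just a placeholder for your own.

apt-get install snmp

snmpget -v2c -c mycompany localhost 1.3.6.1.2.1.1.5.0

snmpwalk -v2c -c mycompany 192.168.10.250 1.3.6.1.2.1.1

The first OID is sysName.0, and the second walks the system subtree.  If both return data, the community strings and the firewall/VLAN paths are in good shape.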

Monitoring VM – Cacti Installation
6.  Continuing on the same VM, install Cacti:

apt-get update

apt-get install cacti

During the installation process, MySQL will be installed, and the installer will ask what you would like the MySQL root password to be.  Then it will ask what you would like Cacti’s MySQL password to be.  Choose passwords as desired.

Now the Cacti installation is available via http://[cactiservername]/cacti with a username and password of "admin".  Cacti will then ask you to change the admin password.  Choose whatever you wish.

7.  Download the PowerConnect add-on from http://docs.cacti.net/usertemplate:host:dell:powerconnect:62xx and unpack both zip files.

8.  Import the host template via the GUI.  Log into Cacti, go to Console > Import Templates, select the desired file (in this case, cacti_host_template_dell_powerconnect_62xx_switch.xml), and click Import.

9.  Copy the 62xx_cpu.pl script into the Cacti script directory on the server (/usr/share/cacti/site/scripts).  It may need executable permissions set.  If you downloaded it to a Windows machine but need to copy it to the Linux VM, WinSCP works nicely for this.

10.  Depending on how things were copied, there might be some Windows-style line endings in the .pl file.  You can clean up the 62xx_cpu.pl file by running the following:

dos2unix 62xx_cpu.pl
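If dos2unix isn’t already on the VM, and to cover the executable bit mentioned in step 9, something along these lines should take care of both.  The path assumes the default package location noted above.

apt-get install dos2unix

cd /usr/share/cacti/site/scripts

dos2unix 62xx_cpu.pl

chmod +x 62xx_cpu.pl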

Using Cacti
You are now ready to run Cacti so that you can connect and monitor your devices. This example shows how to add the device to Cacti, then monitor CPU and a specific data port on the switch.

1.  Launch Cacti from your workstation by browsing out to http://[cactiservername]/cacti  and enter your credentials.

2.  Create a new Graph Tree via Console > Graph Trees > Add.  You can call it something like “Switches” then click Create.

3.  Create a new device via Console > Devices > Add.  Give it a friendly description, and the host name of the device.  Enter the SNMP Community name you decided upon earlier.  In my example above, I show the community name as being “mycompany” but choose whatever fits.  Remember that community names are case sensitive.

4.  To create a graph for monitoring CPU of the switch, click Console > Create New Graphs.  In the host box, select the device you just added.   In the “Create” box, select “Dell Powerconnect 62xx – CPU” and click Create to complete.

5.  To create a graph for monitoring a specific Ethernet port, click Console > Create New Graphs.  In the Host box, select the device you just added.  Put a check mark next to the port number desired, and select In/Out bits with total bandwidth.  Click Create > Create to complete. 

6.  To add the chart to the proper graph tree, click Console > Graph Management.  Put a check mark next to the graphs desired, and change the “Choose an action” box to “Place on a Tree [Tree name].”

Now when you click on Graphs, you will see your two items being monitored.

image

By clicking on the magnifying glass icon, or by using the “Graph Filters” near the top of the screen, you can easily zoom in or out to various sampling periods to suit your needs.

Conclusion
Using SNMP and a tool like Cacti can provide historical performance data for non-virtualized devices and systems in ways you’ve grown accustomed to in vSphere environments.  How hard are your switches running?  How much internet bandwidth does your organization use?  This will tell you.  Give it a try.  You might be surprised at what you find.

Vroom! Scaling up Virtual Machines in vSphere to meet performance requirements–Part 2

In my original post, Scaling up Virtual Machines in vSphere to meet performance requirements, I described a unique need for the Software Development Team to have a lot of horsepower to improve the speed of their already virtualized code compiling systems.  My plan of attack was simple.  Address the CPU bound systems with more powerful blades, and scale up the VMs accordingly.  Budget constraints axed the storage array included in my proposal, and also kept this effort limited to keeping the same number of vSphere hosts for the task. 

The four new Dell M620 blades arrived and were quickly built up with vSphere 5.0 U2 (Enterprise Plus licensing) with the EqualLogic MEM installed.  A separate cluster was created to ensure all build systems were kept separate, and so that I didn’t have to mask any CPU features to make them work with previous generation blades.  Next up was to make sure each build VM was running VM hardware level 8.  Prior to vSphere 5, the guest VM was unaware of the NUMA architecture behind it.  Without the guest OS understanding memory locality, one could introduce problems into otherwise efficient processes.  While I could find no evidence that the compilers for either OS are NUMA aware, I knew the operating systems understood NUMA.

Each build VM has a separate vmdk for its compiling activities.  Their D:\ drive (or /home for Linux) is where the local sandboxes live.  I typically set this second drive to a “Virtual Device Node” other than 0:x.  This has proven beneficial in previous performance optimization efforts.

I figured the testing would be somewhat trivial, and would be wrapped up in a few days.  After all, the blades were purchased to quickly deliver CPU power for a production environment, and I didn’t want to hold that up.  But the data the tests returned had some interesting surprises.  It is not every day that you get to test 16vCPU VMs for a production environment that can actually use the power.  My home lab certainly doesn’t allow me to do this, so I wanted to make this count. 

Testing
The baseline tests would be to run code compiling on two of the production build systems (one Linux, and the other Windows) on an old blade, then the same set with the same source code on the new blades.  This would help in better understanding if there were speed improvements from the newer generation chips.  Most of the existing build VMs are similar in their configuration.  The two test VMs started out with 4vCPUs and 4GB of RAM.  Once the baselines were established, the virtual resources of each VM would be dialed up to see how they responded.  The systems would be compiling the very same source code.

For the tests, I isolated each blade so they were not serving up other needs.  The test VMs resided in an isolated datastore, but lived on a group of EqualLogic arrays that were part of the production environment.  Tests were run at all times during the day and night to simulate real world scenarios, as well as demonstrate any variability in SAN performance.

Build times would be officially recorded in the Developers Build Dashboard.  All resources would be observed in vSphere in real time, with screen captures made of things like CPU, disk, and memory, and dumped into my favorite brain-dump application: Microsoft OneNote.  I decided to do this on a whim when I began testing, but it immediately proved incredibly valuable later on as I found myself looking at dozens of screen captures constantly.

The one thing I didn’t have time to test was the nearly limitless number of scenarios in which multiple monster VMs contend for CPUs at the same time.  But the primary interest for now was to see how the build systems scaled.  I would then make my sizing judgments off of the results, and off of previous experience with smaller build VMs on smaller hosts.

The [n/n] title of each test result column indicates the number of vCPUs followed by the amount of vRAM associated.  Stacked bar graphs show a lighter color at the top of each bar.  This indicates the difference in time between the best result and the worst result.  The biggest factor of course would be the SAN.

Bottleneck cat and mouse
Performance testing is a great exercise for anyone, because it helps challenge your own assumptions about where the bottleneck really is.  No resource lives as an island, and this project showcased that perfectly.  Improving the performance of these CPU bound systems may very well shift the contention elsewhere, or expose other bottlenecks that you were not aware of; resources are just one element of bottleneck chasing.  Applications and the Operating Systems they run on are not perfect, nor are the scripts that kick them off.  Keep this in mind when looking at the results.

Test Results – Windows
The following test results are with Windows 7, running the Visual Studio compiler, across three generations of blades: the Dell M600 (Harpertown), M610 (Nehalem), and M620 (Sandy Bridge).

Comparing a Windows code compile across blades without any virtual resource modifications.

image

Yes, that is right.  The old M600 blades were that terrible when it came to running VMs that were compiling.  This would explain the inconsistent build time results we had seen in the past.  While there was improvement in the M620 over the M610s, the real power of the M620s is that they have double the number of physical cores (16) compared to the previous generations.  Also noteworthy is the significant impact (up to 50%) the SAN was having on the end result.

Comparing a Windows code compile on new blade, but scaling up virtual resources

image

Several interesting observations about this image (above). 

  • When the SAN can’t keep up, it can easily give back the improvements made in raw compute power.
  • Performance degraded when compiling with more than 8vCPUs.  It was so bad that I quit running tests when it became clear they weren’t compiling efficiently (which is why you do not see SAN variability once I started getting negative returns).
  • Doubling the vCPUs from 4 to 8, and the vRAM from 4 to 8, only improved the build time by about 30%, even though the compile showed nearly perfect multithreading (shown below) and 100% CPU usage.  Why the diminishing returns?  Keep reading!

image

On a different note, it was becoming quite clear that I needed to take a little corrective action in my testing.  The SAN was being overworked at all times of the day, and it was impacting my ability to get accurate test results in raw compute power.  The more samples I ran, the more consistent the inconsistency was.  Each of the M620s had a 100GB SSD, so I decided to run the D:\ drive (where the build sandbox lives) on there to see how a lack of storage contention impacted times.  The purple line indicates the build times of the given configuration, but with the D:\ drive of the VM living on the local SSD.

image

The difference between a slow run on the SAN and a run with faster storage was spreading.

Test Results – Linux
The following test results are with Linux, running the GCC compiler, across the same three generations of blades: the Dell M600 (Harpertown), M610 (Nehalem), and M620 (Sandy Bridge).

Comparing a Linux code compile across blades without any virtual resource modifications.

image

The Linux compiler showed a much more linear improvement, along with being faster than its Windows counterpart.  There were noticeable improvements across the newer generations of blades, with no modifications to virtual resources.  However, the margin of variability from the SAN is a concern.

Comparing a Linux code compile on new blade, but scaling up virtual resources

image

At first glance it looks as if the Linux GCC compiler scales up well, but not in a linear way.  But take a look at the next graph, where, similar to the experiment with the Windows VM, I changed the location of the vmdk file used for the /home drive (where the build sandbox lives) over to the local SSD.

image

This shows very linear scalability with Linux and the GCC compiler.  A VM that started at 4vCPUs and 4GB of RAM compiled 2.2x faster when given 8vCPUs and 8GB of RAM, with a total build time of just 12 minutes.  Triple the virtual resources to 12/12, and it is an almost linear 2.9x faster than the original configuration.  Bump it up to 16vCPUs, and diminishing returns begin to show up, at 3.4x faster than the original configuration.  I suspect crossing NUMA nodes and the architecture of the code itself were impacting this a bit.  Still, don’t lose sight of the fact that a build that could take up to 45 minutes on the old configuration took only 7 minutes with 16vCPUs.

The big takeaways from these results are the differences in compiler scalability, and how overtaxed the storage is.  Let’s take a look at each one of these.

The compilers
Internally it had long been known that Linux compiled the same code faster than Windows.  Way faster.  But for various reasons it had been difficult to pinpoint why.  The data returned made it obvious.  It was the compiler.

image

While it was clear that the real separation in multithreaded compiling occurred after 8vCPUs, the real problem with the Windows Visual Studio compiler begins after 4vCPUs.  This surprised me a bit because when monitoring the vCPU usage (in stacked graph format) in vCenter, it was using every CPU cycle given to it, and multithreading quite evenly.  The testing used Visual Studio 2008, but I also tested newer versions of Visual Studio, with nearly the same results. 

Storage
The original proposal included storage to support the additional compute horsepower.  The existing set of arrays had served our needs very well, but were really targeted at general purpose I/O needs with a focus on capacity.  During the budget review process, I had received many questions as to why we needed a storage array.  Boiling it down to even the simplest of terms didn’t allow that line item to survive the last round of cuts.  Sure, there was a price to pay for the array, but the results show there is also a price to pay for not buying the array.

I knew storage was going to be an issue, but when contention occurs, it’s hard to determine how much of an impact it will have.  Think of a busy freeway, where throughput is pretty easy to predict up to a certain threshold.  Hit critical mass, and predicting commute times becomes very difficult.  Same thing with storage.  But how did I know storage was going to be an issue?  The free tool provided to all Dell EqualLogic customers: SAN HQ.  This tool has been a trusted resource for me in the past, and removes ALL speculation when it comes to historical usage of the arrays, and other valuable statistics.  IOPS, read/write ratios, latency, etc.  You name it.

Historical data of Estimated Workload over the period of 1 month

image

Historical data of Estimated Workload over the period of 12 months

image

Both images show that, with the exception of weekends, the SAN arrays are maxed out to 100% of their estimated workload.  The overtaxing shows up on the lower part of each screen capture, with the reads and writes surpassing the brown line indicating the estimated maximum IOPS of the array.  The 12 month history showed that our storage performance needs were trending upward.

Storage contention and how it relates to used CPU cycles is also worth noting.  Look at how inadequate storage I/O influences compute.  The image below shows the CPU utilization for one of the Linux builds using 8vCPUs and 8GB RAM when the /home drive was using fast storage (the local SSD on the vSphere host).

image

Now look at the same build when running against a busy SAN array.  It completely changes the CPU usage profile, and the build took 46% longer to complete.

image

General Observations and lessons

  • If you are running any hosts using pre-Nehalem architectures, now is a good time to question why. They may not be worth wasting vSphere licensing on. The core count and architectural improvements on the newer chips put the nails in the coffin on these older chips.
  • Storage Storage Storage. If you have CPU intensive operations, deal with the CPU, but don’t neglect storage. The test results above demonstrate how one can easily give back the entire amount of performance gains in CPU by not having storage performance to support it.
  • Giving a Windows code compiling VM a lot of CPU, but not increasing the RAM, seemed to make the compiler trip over its own toes.  This makes sense, as more CPUs need more memory addresses to work with.
  • The testing showcased another element of virtualization that I love. It often helps you understand problems that you might otherwise be blind to. After establishing baseline testing, I noticed some of the Linux build systems were not multithreading the way they should. Turns out it was some scripting errors by our Developers. Easily corrected.

Conclusion
The new Dell M620 blades provided an immediate performance return.  All of the build VMs have been scaled up to 8vCPUs and 8GB of RAM to get the best return while providing good scalability of the cluster.  Even with that modest doubling of virtual resources, we now have nearly 30 build VMs that, once storage performance is no longer an issue, will run between 4 and 4.3 times faster than the same VMs on the old M600 blades.  The primary objective moving forward is to target storage that will adequately support these build VMs, as well as looking into ways to improve multithreaded code compiling in Windows.

Helpful Links
Kitware blog post on multithreaded code compiling options

http://www.kitware.com/blog/home/post/434

Using a Synology NAS as an emergency backup DNS server for vSphere

Powering up a highly virtualized infrastructure can sometimes be an interesting experience.  Interesting in that “crossing-the-fingers” sort of way.  Maybe it’s an outdated run book, or an automated power-on of particular VMs that didn’t occur as planned.  Sometimes it is nothing more than a lack of patience between each power-on/initialization step.  Whatever the case, if it is a production environment, there is at least a modest amount of anxiety that goes along with this task.  How often does this even happen?  For those who have extended power outages, far too often.

One element that can affect power-up scenarios is availability of DNS.  A funny thing happens when everything is virtualized: equipment that powers the infrastructure may need DNS, but DNS is inside of the infrastructure that needs to be powered up.  A simple way around this circular referencing problem is to have another backup DNS server that supplements your normal DNS infrastructure.  This backup DNS server acts as a slave to the server having authoritative control for that DNS zone, and would handle at minimum recursive DNS queries for critical infrastructure equipment and vSphere hosts.  While all production systems would use your normal primary and secondary DNS, this backup DNS server could be used as the secondary name server for a few key components:

  • vSphere hosts
  • Server and enclosure Management for IPMI or similar side-band needs
  • Monitoring nodes
  • SAN components (optional)
  • Switchgear (optional)

vSphere certainly isn’t as picky as it once was when it comes to DNS.  Thank goodness.  But guaranteeing immediate availability of name resolution will help your environment during these planned, or unplanned power-up scenarios.  Those that do not have to deal with this often have at least one physical Domain Controller with integrated DNS in place.  That option is fine for many organizations, and certainly accomplishes more than just availability of name resolution.  AD design is a pretty big subject all by itself, and way beyond the scope of this post.  But running a spare physical AD server isn’t my favorite option for a number of different reasons, especially for smaller organizations.  Some folks way smarter than me might disagree with my position.  Here are a few reasons why it isn’t my preferred option.

  • One may be limited in Windows licensing
  • There might be limited availability of physical enterprise grade servers.
  • One may have no clue as to if, or how a physical AD server might fit into their DR strategy.

As time marches on, I also have a feeling that this approach will be falling out of favor anyway.  During a breakout session on optimizing virtualized AD infrastructures at VMworld 2012, it was interesting to hear that the VMware mothership still has some physical AD servers running the PDCe role.  However, they were actively in the process of eliminating this final physical element, and building recommendations around doing so.  And let’s face it, a physical DC doesn’t align with the vision of elastic, virtualized datacenters anyway.

To make DNS immediately available during these power-up scenarios, the prevailing method in the “Keep It Simple, Stupid” category has been running a separate physical DNS server – either a Windows member server with a DNS role, or a Linux server with BIND.  But it is a physical server, and we virtualization nuts hate that sort of thing.  But wait!  …There is one more option.  Use your Synology NAS as an emergency backup DNS server.  The intention here is not to supplant your normal DNS infrastructure; it’s simply to help a few critical pieces of equipment start up.

The latest version of Synology’s DSM (4.1) comes with a beta version of a DNS package.  It is pretty straightforward, but I will walk you through the steps of setting it up anyway.

1.  Verify that your Windows DNS servers allow zone transfers to the IP address of the NAS.  Jump into the Windows Server DNS MMC snap-in, highlight the zone you want to set up a zone transfer for, and click Properties.  Add or verify the settings that allow a zone transfer to the new slave server.

2.  In the Synology DSM, open the Package Center, and install the DNS package.

3.  Enable the Synology DSM firewall to allow DNS traffic.  In the Synology DSM, open Control Panel > Firewall.  Highlight the desired interface, and click Create.  Choose “Select from a built in list of applications” and choose “DNS Server.”  Save the rule, and exit out of the Firewall application.

4.  Open up “DNS Server” from the Synology launch menu.

image

5.  Click on “Zones” and click Create > Slave Zone.  Choose a “Forward Zone” type, and select the desired domain name and Master DNS server.

image

6.  Verify the population of resource records by selecting the new zone and clicking Edit > Resource Records.

image

7.  If you want or need to have this forward DNS requests, enable the forwarders checkbox.  (In my home lab, I enable this.  In my production environment, I do not.)

image

8.  Complete the configuration, and test with a client using this IP address only for DNS, simply to verify that it is operating correctly.  Then, go back and tighten up some of the security mechanisms as you see fit.  Once that is completed, jump back into your ESXi hosts (and any other equipment) and configure your secondary DNS to use this server.

image
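For step 8, a quick sanity check from a client, plus the command line route for adding the NAS as a secondary name server on an ESXi host, might look something like the following.  The 192.168.10.20 address and the host name are placeholders for your NAS and your own zone; esxcli is available from the ESXi shell or over SSH, and the “add” simply appends the NAS to the host’s existing DNS server list.

nslookup vcenter.mycompany.local 192.168.10.20

esxcli network ip dns server add --server=192.168.10.20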

In my case, I had a Synology NAS to try this out on in my home lab, as well as a newly deployed unit at work (serving the primary purpose of a Veeam backup target).  In both cases, it has worked exactly as expected, and allowed me to junk an old server at work running BIND.

If the NAS lived on an isolated storage network that wasn’t routable, then this option wouldn’t work, but if you have one living on a routable network somewhere, then it’s a great option.  The arrangement simplifies the number of components in the infrastructure while ensuring service availability.

Even if you have multiple internal zones, you may want to have this slave server only handling your primary zone.  No need to make it more complicated than it needs to be.  You also may choose to set up the respective reverse lookup zone as a slave.  Possible, but not necessary for this purpose.

There you have it.  Nothing ground breaking, but a simple way to make a more resilient environment during power-up scenarios.

Helpful Links:

VMWorld 2012.  Virtualizing Active Directory Best Practices (APP-BCA1373).  (Accessible by VMWorld attendees only)
http://www.vmworld.com/community/sessions/2012/

Vroom! Scaling up Virtual Machines in vSphere to meet performance requirements

image

A typical conversation with one of our Developers goes like this.  “Hey, that new VM you gave us is great, but can you make it say, 10 times faster?”   Another day, and another request by our Development Team to make our build infrastructure faster.  What is a build infrastructure, and what does it have to do with vSphere?  I’ll tell you…

Software Developers have to compile, or “build,” their source code before it is really usable by anyone.  Compiling can involve just a small bit of code, or millions of lines.  Developers will often perform builds on their own workstations, as well as on designated “build” systems.  These dedicated build systems are often part of a farm of systems that are churning out builds on a fixed schedule, or on demand.  Each might be responsible for different products, platforms, versions, or build purposes.  This can result in dozens of build machines.  Most of this is orchestrated by a lot of scripting or build automation tools.  This type of practice is often referred to as Continuous Integration (CI), and it is driven by Test Driven Development and Lean/Agile development practices.

In the software world, waiting for builds is wasting money.  Slower turnaround times and longer cycles leave less time or willingness to validate that changes to the code didn’t break anything.  So there is a constant desire to make all of this faster.

Not long after I started virtualizing our environment, I demonstrated the benefits of virtualizing our build systems.  Oftentimes the physical build systems were tired old machines lacking uniformity, protection, revision control, or performance monitoring.  That is not exactly a desired recipe for business critical systems.  We have benefited in so many ways from having these systems virtualized, whether it is cloning a system in just a couple of minutes, or knowing they are replicated offsite without even thinking about it.

But one problem: code compiling takes CPU.  Massive amounts of it.  It has been my observation that nothing makes better use of multiple cores than a compiler.  Many applications simply aren’t able to multithread, while other applications can, but don’t do it very well – including well known enterprise application software.  Throw the right command line switch on a compiler, and it will peg out your latest rocket of a workstation.
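To give a sense of what “the right command line switch” looks like on the Linux side, the jobs flag on make is usually all it takes to light up every core.  A minimal example, assuming a make-based build:

make -j$(nproc)

Visual Studio has comparable options on the Windows side (the /MP compiler flag and MSBuild’s /m switch).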

Take a look below.  This is a 4vCPU VM.  That solid line pegged at 100% nearly the entire time is pretty much the way the system will run during the compile.  There are exceptions, as tasks like linking are single threaded.  What you see here can go on for hours at a time.

image

This is a different view of that same VM above, showing a nearly perfect distribution of threading across the vCPUs assigned to the VM.

image

So, as you can see, the efficiency of the compilers actually presents a bit of a problem in the virtualized world.  Let’s face it, one of the values virtualization provides is the unbelievable ability to use otherwise wasted CPU cycles for other systems that really need them.  But what happens if you really need them?  Well, consolidation ratios go down, and sizing becomes really important.

Compiling from source code can involve handling literally millions of tiny files.  You might think there is a ton of disk activity.  There certainly can be I/O, but the process is rarely disk bound.  This stuck out loud and clear after some of the Developers’ physical workstations had SSDs installed.  After an initial hiccup with some bad SSDs, further testing showed almost no speed improvement.  Looking at some of the performance data on those workstations showed that the SSDs had no effect because the systems were always CPU bound.

Even with the above, some evidence suggests that the pool of Dell EqualLogic arrays (PS6100 and PS600) used in this environment were nearing their performance thresholds.  Ideally, I would like to incorporate the EqualLogic hybrid array.  The SSD/SAS combo would give me the IOPS needed if I started running into I/O issues.  Unfortunately, I have to plan for incorporating this into the mix perhaps a bit later in the year.

RAM for each build system is a bit more predictable.  Most systems are not memory hogs when compiling.  4 to 6 gigabytes of RAM used during a build is quite typical.  Linux has a tendency to utilize more if it is available, especially when it comes to file I/O.

The other variable is the compiler.  Windows platforms may use something like Visual Studio, while Linux will use a GCC compiler.  The differences in performance can be startling.  Compile the exact same source code on two machines with the exact same specs, with one running Windows/Visual Studio, and the other running Linux/GCC, and the Linux machine will finish the build in 1/3rd the time.  I can’t do anything about that, but it is a worthy data point when trying to speed up builds.

The Existing Arrangement
All of the build VMs (along with the rest of the VMs) currently run in a cluster of 7 Dell M6xx blades inside a Dell M1000e enclosure.  Four of them are Dell M600s with dual socket, Harpertown based chips.  Three others are Dell M610s running Nehalem chips.  The Harpertown chips don’t support hyperthreading, so vSphere sees just a total of 8 logical cores on those hosts.  The Nehalem based systems show 16 logical cores.

All of the build systems (25 as of right now, running a mix of Windows and Linux) run no greater than 4vCPUs.  I’ve held firm on this limit of going no greater than 50% of the total physical core count of a host.  I’ve gotten some heat from it, but I’ve been rewarded with very acceptable CPU Ready times.  After all, this cluster had to support the rest of our infrastructure as well.  By physical workstation standards (especially expensive Development workstations), they are pathetically slow.  Time to do something about it.

The Plan
The plan is simple: increase CPU resources.  For the cluster, I could either scale up (bigger hosts) or scale out (more hosts).  In my case, I was really limited on the capabilities of the hosts, plus I wanted to refrain from buying more vSphere licenses unless I had to, so it was well worth it to replace the 4 oldest M600 blades (using Intel Harpertown chips).  The new blades, which will be Dell M620s, will have 192GB of RAM versus just 32GB in the old M600s.  And lastly, in order to take advantage of some of the new chip architectures in the new blades, I will be splitting this off into a dedicated 4 host cluster.

                      New M620 Blades      Old M600 Blades
Chip                  Intel Xeon E5-2680   Intel Xeon E5430
Clock Speed           2.7GHz (or faster)   2.66GHz
# of physical cores   16                   8
# of logical cores    32                   8
RAM                   192 GB               32 GB

The new blades will have dual 8 core Sandy Bridge processors, giving me 16 physical cores, and 32 logical cores with hyperthreading, for each host.  This is double the physical cores, and 4 times the logical cores, compared to the older hosts.  I will also be paying the premium price for clock speed.  I rarely get the fastest clock speed of anything, but in this case, it can truly make a difference.

I have to resist throwing in the blades and just turning up the dials on the VMs.  I want to understand to what level I will be getting the greatest return.  I also want to see to what level does the dreaded CPU Ready value start cranking up.  I’m under no illusion that a given host only has so many CPU cycles, no matter how powerful it is.  But in this case, it might be worth tolerating some degree of contention if it means that the majority of time it finishes the builds some measurable amount faster.

So how powerful can I make these VMs?  Do I dare go past 8 vCPUs?  12 vCPUs?  How about 16?  Any guesses?  What about NUMA, and the negative impact that might occur if one goes beyond a NUMA node?  Stay tuned!  …I intend to find out. 

Saving time with the AutoLab for vSphere

Earlier this year, I started building up a few labs for a little work, and a little learning.  (Three Labs for three reasons).  Not too long after that is when I started playing around with the AutoLab.  For those not familiar with what the AutoLab is, it is most easily described as a crafty collection of scripts, open source VMs, and shell VMs that allow one to build a nested vSphere Lab environment with minimal effort.  The nested arrangement can live inside of VMware Workstation, Fusion, or ESXi.  AutoLab comes to you from a gentleman by the name of Alastair Cooke, along with support from many over at the vBrownBag sessions.  What?  You haven’t heard of the vBrownBag sessions either?  Even if you are just a mild enthusiast of VMware and virtualization, check out this great resource.  Week after week, they put out somewhat informal, but highly informative webinars.  The AutoLab and vBrownBag sessions are both great examples of paying it forward to an already strong virtualization community.

The Value of AutoLab
Why use it?  Simple.  It saves time.  Who doesn’t like that?  To be fair, it doesn’t really do anything that you couldn’t do on your own.  But here’s the key.  Scripts don’t forget to do stuff.  People do (me included).  Thus, the true appreciation of it really only comes after you have manually built up your own nested lab a few different times.  With the AutoLab, set up the requirements (well documented in the deployment guide), kick it off, and a few hours later your lab is complete.  There are tradeoffs of course with a fully nested environment, but it is an incredibly powerful arrangement, and thanks to the automation of it all, will allow you to standardize your deployment of a lab.  The AutoLab has made big improvements with each version, and now assists in incorporating vCloud Director, vShield, View, Veeam One, and Veeam Backup & Replication.

Letting the AutoLab take care of some of the menial things through automation is a nice reminder of the power of automation in general.  Development teams are quite familiar with this concept, where automation and unit testing allow them to be more aggressive in their coding to produce better results faster.  Fortunately, momentum seems to be gaining in IT around this, and VMware is doing its part as well with things like vCenter Orchestrator, PowerCLI, and stateless host configurations with AutoDeploy.

In my other post, I touch a bit on what I use my labs for.  Those who know me also know I’m a stickler for documentation.  I have high standards for documentation from software manufacturers, and from myself when providing it for others.  The lab really helps me provide detailed steps and accurate screen shots along the way.

The AutoLab nested arrangement I touch most is on my laptop.  Since my original post, I did bump up my laptop to 32GB of RAM.  Some might think this would be an outrageously expensive luxury, but a 16GB Kit for a Dell Precision M6600 laptop costs only $80 on NewEgg at the time of purchase (Don’t ask me why this is so affordable.  I have no idea.).  Regardless, don’t let this number prevent you from using the AutoLab.  The documentation demonstrates how to make the core of the lab run with just 8GB of RAM.

A few tips
Here are a few tips that make my experience a bit better with the AutoLab nested vSphere environment.  Nothing groundbreaking, but just a few things to make life easier.

  • I have a Shortcut to a folder that houses all of my shortcuts needed for the lab.  Items like the router Administration, the NAS appliance, and your local hosts file which you might need to edit on occasion.
  • I choose to create another VMnet network in VMware Workstation so that I could add a few more vNICs to my nested ESXi hosts. That allows me to create a vSwitch to be used to play with additional storage options (VSAs, etc.) while preserving what was already set up.
  • The FREESCO router VM is quite a gem.  It provides quite a bit of flexibility in a lab environment (a few adjustments and you can connect to another lab living elsewhere), and you might even find other uses for it outside of a lab.
  • To allow direct access to the FreeNAS storage share from your workstation, you will need to click on the “Connect a host virtual adapter to this network” option on the VMnet3 network in the Network Editor of VMware Workstation.
  • You might be tempted to trim up the RAM on various VMs to make everything fit.  Trim up the RAM too much on say, the vMA, and SSH won’t work.  Just something to be mindful of.
  • On my laptop, I have a 256GB Crucial M4 SSD for the OS, and a 750GB SATA disk for everything else.  I keep a few of the VMs (the virtualized ESXi hosts, the vCenter server, and a DC) on the SSD, and most everything else on the SATA drive.  This makes everything pretty fast while using my SSD space wisely.
  • You’ll be downloading a number of ISOs and packages.  Start out with a plan for organization so that you know where things are when/if you have to rebuild.
  • The ReadMe file inside of the packaged FreeNAS VM is key to understanding where and how to place the installation bits.  Read carefully.
  • The automated build of the DC and vCenter VMs can be pretty finicky on which Windows Server ISO it will work with.  If you are running into problems, you may not be using the correct ISO.
  • If you build up your lab on a laptop, and suddenly you can’t get anything to talk to, say, the storage network, it may be the wireless (or wired) network you connected to.  I had this happen to me once when the wireless address range happened to be the same as part of my lab.
  • As with any VMs you’ll be running on a desktop/laptop with VMware Workstation, make sure you create antivirus real-time scanning exceptions for all locations that will be housing VMs.  The last thing you need is your antivirus thinking it’s doing you a favor.
  • The laptop I use for my lab is also my primary system.  It’s worth a few bucks to protect it with a disk imaging solution.  I choose to dump the entire system out to an external drive using Acronis TrueImage.  I typically run this when all of the VMs are shut off.

So there you have it.  Get your lab set up, and hunker down with your favorite book, blog, links from twitter, or vBrownBag session, and see what you can learn.  Use it and abuse it.  It’s not a production environment, and is a great opportunity to improve your skills, and polish up your documentation.

– Pete

Helpful Links
AutoLab
http://www.labguides.com/autolab
Twitter: @DemitasseNZ #AutoLab https://twitter.com/DemitasseNZ

vBrownBag sessions.  A great resource, and an easy way to surround yourself (virtually) with smart people. http://professionalvmware.com/brownbags/
Twitter:  @cody_bunch @vBrownBag https://twitter.com/vBrownBag

FREESCO virtual router.  Included and preconfigured with the AutoLab, but worth looking at their site too.
http://freesco.org/

FreeNAS virtual storage.  Also included and preconfigured with the AutoLab.
http://www.freenas.org/