The vSphere Home Lab. For some, it is a tool for learning. For others it is a hobby. And for the rest of us, it is a weird addiction rationalized as one of the first two reasons. Home Labs come in all shapes and sizes, and there really is no right or wrong way to create one. Apparently interest in vSphere Home Labs hasn’t waned, as there are now countless resources available online illustrating various designs. At one of our recent Seattle VMUG meetings, we gave a presentation on Home Lab arrangements and ideas. There was great interaction from the audience, and we received several comments afterward on how much they enjoyed the discussion and learning about what others were doing. If you are a VMUG leader and are looking for ideas for presentations, I’d recommend this topic at one of your own local meetings.
Much like a real Data Center, Home Labs are a continual work in progress. Shiny new gear often sits right next to the warts. Replacing the old equipment with new gear usually correlates to how much time and money you wish to dedicate to the effort. I marvel at some setups by others in the industry. A few of the more recent ones to keep an eye on are the work Erik Bussink does with his high speed networking, and the cool setup Jason Langer has with his half height cabinet and rack mounted hosts, all on 10GbE. Pretty funny considering how many companies are still running 3 hosts with 1GbE networking.
In my conversations with others in the community, I realized I didn’t have a post I could direct someone to when they would ask what I used in my own environment. Well, let me lay it out for you, as of February, 2015.
Primary vSphere cluster
2 hosts currently make up this cluster, and consist of the following:
- Lian Li PC-V351B chassis paired with a Scythe SY 1225SL 12L 120mm case fan.
- SuperMicro MBD-X9SCM-F-O LGA Motherboard with IPMI (a must!)
- Intel E3-1230 Sandy Bridge 3.2GHz CPU (single socket, 4 physical cores)
- 32GB RAM
- Seasonic X series SS-400FL Power supply
- Qty 3, Intel E1G42ETBLK dual port NIC
- Mellanox MT25418 DDR 2 port InfiniBand HCA (10Gb per connection)
- 8GB USB drive (boot)
- 2TB SATA disk for local storage (testing)
- Qty 2, SATA based SSDs (varies with testing)
Management cluster
At this time, a single host makes up this cluster, but I intend to add a second unit:
- Intel NUC BOXD54250WYKH1 Intel Core i5-4250U
- Intel 530 240GB mSATA SSD
- Crucial 16GB Kit (2x8GB)
- Extra drive bay (for additional 2.5" SSD if needed)
The ATX style hosts have served quite well over the last 2 1/2 years. They are starting to show their age, but are quiet, and power efficient (read: low heat). Unfortunately they max out at just 32GB of RAM, which gets eaten up pretty quickly these days. The chassis started out nearly empty, but as I added SSDs and spinning disks for additional testing, InfiniBand cards, and the occasional PCIe flash card or storage controller, I found I don't have much room to spare anymore.
The Intel NUC is an interesting solution. As a vSphere host, its biggest constraints are that it is limited to 16GB of RAM and a single 1GbE NIC. Since these units will serve as my management cluster, that should be fine, and it allows me to be more destructive on the primary two host cluster. They also fit into the small server rack quite nicely. I prefer the D54250WYKH over the traditional Intel D54250WYK model. It's slightly thicker, but allows for an additional internal 2.5" drive. This offers a lot of flexibility if you wanted to keep some VMs on local storage, or possibly do some limited testing with host based caching. If they ever become too underpowered, they will always find use as a media server or workstation.
Most of my networking needs flow through a Cisco SG300-20. This is a feature rich, layer 3 switch that I've written about in the past (Using the Cisco SG300-20 Layer 3 switch in a home lab). I've used up all 20 ports, and really need another one. However, with the advent of other good layer 3 switches out there, and with the possibility of eventually moving my lab to 10GbE, I've been making do with what I have.
As noted in my post Testing Infiniband in the home lab with PernixData FVP, I introduced InfiniBand as a relatively affordable way to test high speed interconnects between hosts. With only two hosts, I can simply connect them directly and avoid the need for an InfiniBand switch. They only pass vMotion and PernixData FVP traffic, so there is no need to worry about routing. Adding a 3rd or 4th host gets complex, as I'd have to take the plunge and invest in an IB switch (they are loud and not cheap).
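The scaling problem is easy to quantify: a full mesh of direct links needs one cable between every pair of hosts and n-1 HCA ports per host, so a dual port card like the MT25418 runs out of ports at four hosts (and every link is its own little subnet to manage). A quick sketch:

```python
# Direct-connect (full mesh) scaling: each host needs a dedicated
# link to every other host, with no switch in between.
def mesh_links(hosts: int) -> int:
    """Total point-to-point links in a full mesh of the given size."""
    return hosts * (hosts - 1) // 2

def ports_per_host(hosts: int) -> int:
    """HCA ports each host must dedicate to the mesh."""
    return hosts - 1

HCA_PORTS = 2  # the Mellanox MT25418 is a 2 port card

for n in range(2, 5):
    fits = ports_per_host(n) <= HCA_PORTS
    print(f"{n} hosts: {mesh_links(n)} links, "
          f"{ports_per_host(n)} ports/host, "
          f"{'direct connect OK' if fits else 'needs an IB switch'}")
```

Port-wise, a triangle of three hosts still fits a dual port HCA, but the cabling and per-link subnets multiply quickly, which is where the complexity comes from.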
Persistent storage comes from a Synology DS1512+ and a Synology DS1514+ NAS unit. Both are 5 bay units with a mix of spinning disk and SSDs, the primary difference being that the DS1514+ has four 1GbE ports versus two on the older DS1512+. One unit houses the majority of my lab VMs and non-lab based file storage, while the other is used for experimentation and performance testing. Realistically I only need one Synology unit, but I was able to pick up the newer model for a great price, and I couldn't refuse. My plan is to split lab duties and general storage needs across the two units.
Synology seems to have won the battle of home lab storage. Those who own them know that while they are a little pricey, they are well worth it, and offer many benefits beyond just serving up block or file storage for a vSphere cluster.
My luck with UPS units in the home has not been anything to brag about. It’s usually a case of looking like they work until you really need them. So far the best luck I’ve had is with the unit I’m currently using. It is a CyberPower 1500AVR. With the entire lab drawing around 200 watts, this means that there is only about a 25% load on the UPS.
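That load percentage is simple arithmetic; here is a quick sanity check (the 900 watt output rating is my assumption for a typical 1500VA unit in this class, so verify against your model's spec sheet):

```python
# Rough UPS sizing check for the lab.
LAB_DRAW_W = 200    # measured draw of the whole lab
UPS_RATED_W = 900   # assumed output rating of a 1500VA unit

load_pct = LAB_DRAW_W / UPS_RATED_W * 100
print(f"UPS load: {load_pct:.0f}%")  # roughly a quarter of capacity
```

Keeping the load this low also leaves headroom for a longer runtime during an outage.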
A two shelf wire utility rack from Lowe's fit the bill quite nicely. It is small, affordable ($25), and houses the goofy form factor of the Lian Li ATX chassis quite well. The only problem is that if I add another ATX style host, I may have to come up with a better rack solution.
While I had a good lab environment to test with, up until a few months ago the workstation sitting next to the lab was old, tired, and no longer functional. I found myself not even using it. So I replaced it with an Intel NUC as well. There is a bit of a price premium when buying the NUC, but the form factor, performance, simplicity, and power consumption all make it a no-brainer in my book. The limitations it has as a vSphere host (a single NIC, and 16GB of RAM max) are not an issue when used as a workstation. It performs great, and powers a dual monitor setup really well.
What it looks like
Standing at just 35” high, you can see that it is pretty self contained.
The Home Lab Road Map / Wish List
When you have a Home Lab, you have plenty of time to think about what you want next. The "what you have" is never quite the same as the "what you want." So here is the path I’ll probably be taking:
- A second Intel NUC to serve as a 2 node Management cluster. (Done. See here)
- 10GbE switch. My primary hesitation on this is cost, and noise.
- New hosts. I’m tempted to go the route of a 2U rack mounted chassis so that I can go to three or four hosts more efficiently. With SuperMicro offering some motherboards with a built-in 10GbE port, that is pretty enticing.
- New gateway. As the lab grows more sophisticated, the network topology looks more and more like a small production environment. That is why a proper router/firewall is on this wish list.
- New wireless AP. Not technically part of the Home Lab, but it plays an important role for obvious reasons. I need a wireless AP that is not prone to memory leaks and manual reboots every three or four days.
- Affordable PCIe based flash. It is really making inroads in the enterprise, but it’s still not affordable enough for the home. I hope this changes, as PCIe avoids so many headaches with flash that runs through a traditional storage controller.
Lessons learned over the years
A few takeaways have come from spending many hours working with my Home Lab. These reflect personal preferences more than anything, but they might save you some effort along the way as well.
1. The best Home Lab is the one that you use. For quite some time, I used a nested lab on a burly laptop, in addition to the physical setup. Ultimately the physical Home Lab won out because it fit more of what I wanted to test and work on. If my interests were more focused on scripting or workflow automation, perhaps a nested lab would be fine. But I’m a bit too much of a gear-head, and my job now focuses on performance on top of real hardware. I also didn’t care to power up and power down the entire nested lab each time I wanted to work on the laptop.
2. While "lab" implies all things experimental, it is common to have a desire for some services to be running all the time. Perhaps your lab has some responsibilities as a media server. Or in my case, it also runs my Horizon View environment that I use for remote access. This makes the idea of tearing down a lab on a whim a bit more complex. It’s where a Management cluster can come in handy. Having it physically segregated helps to keep things operational when you want to do a complete rebuild, or experiment with a beta version of vSphere.
3. I stay away from the cheap SSDs. They have no place in a real Data Center, and aren’t much better in the home. When it comes to flash, you get what you pay for. And sometimes, even when you pay, you still don’t get good performing SSDs. Spend your money wisely. Buying something multiple times over doesn’t save much money in the end. And remember, controllers matter too.
4. Initially I wanted to configure an arrangement that consumed as little power as possible. Keeping the power down means keeping the heat down, and thus the noise. Since my entire lab sits just an arm’s length from where I work, this was important in the beginning, and it is still important now. The entire setup draws about 200 watts of power and makes 38dB of noise 3 feet away. I’ve refused to add anything loud or hot, and if I’m forced to, the lab will have to be relocated to a new area.
5. There is always a way to do things a little cheaper. But consider what your time is worth, and remember the reason why you have a Home Lab in the first place. That has driven several of my purchasing decisions, and helps remove some of the petty obstacles that can sidetrack the best of us from working on what we intended to.
6. While some technologies and practices trickle down from production environments to the Home Lab, sometimes the opposite happens. Two good examples of this are the use of the VCSA (vSphere 5.5 or later), and letting ESXi run on a USB or MicroSD card. And that is the beauty of a lab. It invites experimentation, and filters out what looks good on paper versus what actually works. Keep an open mind, and use it for what it is good for: making mistakes, and learning.
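As a footnote to point 4 in the list above, a constant 200 watt draw also translates into a predictable running cost; a rough estimate (the $0.12/kWh electricity rate is my assumption, substitute your own):

```python
# Annual energy use and cost of a lab drawing a constant 200W.
DRAW_W = 200          # whole-lab draw from the wall
RATE_PER_KWH = 0.12   # assumed electricity rate, USD

kwh_per_year = DRAW_W / 1000 * 24 * 365
cost_per_year = kwh_per_year * RATE_PER_KWH
print(f"{kwh_per_year:.0f} kWh/year, about ${cost_per_year:.0f}/year")
```

It is a reminder that a low-power design pays off every month, not just in heat and noise.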
Thanks for reading