VDI for me. Part 3
March 20, 2012
In VDI for me. Part 2, I left off with how VMware View was going to be constructed in my environment. We are almost at the point of installing and configuring the VMware View components, but before that is addressed, the most prudent step is to ensure that the right types of traffic can communicate across the different isolated network segments. This post will focus on the security rules needed to do just that. For me, access to these segments is managed by a Celestix MSA 5200i, a 6-port firewall appliance running Microsoft ForeFront Threat Management Gateway (TMG) 2010. While the screen captures are directly from TMG, much of the information here will apply to other security solutions.
Since all of the supporting components of VMware View will need to communicate across network segments anyway, I suggest making accommodations in your firewall before you start building the View components. This is not always practical, but in this case, I found that I only had to make a few adjustments before things were working perfectly with all of the components.
My network design was a fairly straightforward, four-legged topology. (A pretty picture of this can be seen in Part 2.)
External: All users connecting to our View environment.
LAN: View Connection Server dedicated for access from the inside. View Connection Server dedicated for communication with the Security Server. Systems running the View Agent software.
DMZ1: Externally facing View “Security Server.”
DMZ4: vSphere Management Network. vCenter, and the SQL databases providing services for vCenter and View Composer.
For those who have their vSphere Management network on a separate network by way of a simple VLAN, your rules will be simpler than mine. For clarity, I will just show the rules that are used for getting VMware View to work.
Before you get started, make sure you have planned out all of the system names and IP addresses of the various Connection Servers and the VMs running the View Agent. It will make the work later on easier.
Creating Custom Protocols for VMware View in TMG 2010
In order to build the rules properly, you will first need to define some “user-defined” protocols. To keep track of them all, I always included the name “View” (to remember its purpose), along with the direction, type, and the port number. Here is the list (as I named them) that was used as a part of my rule sets.
VMware View Inbound TCP&UDP (4172)
VMware View Outbound (32111)
VMware View Outbound (4001)
VMware View Outbound (8009)
VMware View Outbound (9427)
VMware ViewComposer Inbound (18443)
VMware ViewComposer Outbound (18443)
VMware ViewPCoIP Outbound (4172)
VMware ViewPCoIP SendReceiveUDP (4172)
Page 19 of the VMware View Security Reference details the ports and access needed. I appreciate the detail, and it is all technically correct, but it can be a little confusing. Hopefully, what I provide here will help bridge the gap on anything confusing in the manual. My implementation at this time does not include a View Transfer Server, so if your deployment includes one, please refer to the installation guide.
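To keep the name-to-port mapping honest as the rule sets grow, the protocol list above can be captured in a few lines of plain Python. This is just a bookkeeping sketch of my own (not any TMG API): the names and ports come straight from the list above, and the helper simply checks that each protocol's name never drifts out of sync with its configured port.

```python
import re

# The user-defined protocols listed above, captured as name -> port.
VIEW_PROTOCOLS = {
    "VMware View Inbound TCP&UDP (4172)": 4172,
    "VMware View Outbound (32111)": 32111,
    "VMware View Outbound (4001)": 4001,
    "VMware View Outbound (8009)": 8009,
    "VMware View Outbound (9427)": 9427,
    "VMware ViewComposer Inbound (18443)": 18443,
    "VMware ViewComposer Outbound (18443)": 18443,
    "VMware ViewPCoIP Outbound (4172)": 4172,
    "VMware ViewPCoIP SendReceiveUDP (4172)": 4172,
}

def port_from_name(name):
    """Extract the port from a 'Purpose Direction (port)' style name."""
    match = re.search(r"\((\d+)\)$", name)
    if not match:
        raise ValueError(f"no port suffix in protocol name: {name}")
    return int(match.group(1))

# Sanity check: every name agrees with its configured port.
assert all(port_from_name(name) == port
           for name, port in VIEW_PROTOCOLS.items())
```

The trailing "(port)" naming convention pays off here: any future protocol whose display name and actual port disagree will fail this check immediately.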
Creating Access Rules for VMware View in TMG 2010
The next step will be to build some access rules. Access rules typically define access in a From/To arrangement. Here is what my rules looked like for a successful implementation of VMware View in TMG 2010.
Creating Publishing rules for VMware View in TMG 2010
In the screen above, near the bottom, you see two publishing rules. These are for the purpose of securely exposing a server that you want visible to the outside world. In this case, that would be the View Security Server. The server will still have its private address, as it resides in the DMZ, but it will take on one of the assigned public IP addresses bound to the external interface of the TMG appliance. To make View work, you will need two publishing rules: one for HTTPS, and the other for PCoIP. A View session with the display setting of RDP will use only the HTTPS publisher. A View session with the display setting of PCoIP will use both of the publishing rules. Page 65 of the View 5 Architecture Planning Guide illustrates this pretty well.
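The relationship between the display setting and the publishing rules can be sketched in a few lines. This is only an illustration of the logic described above; the rule names are labels of my own, and the port details (HTTPS on TCP 443, PCoIP on TCP and UDP 4172) reflect the standard ports rather than anything pulled from a TMG configuration.

```python
# Which publishing rules a View session traverses, keyed by display protocol.
PUBLISHING_RULES = {
    "HTTPS": {("TCP", 443)},
    "PCoIP": {("TCP", 4172), ("UDP", 4172)},
}

def rules_for_session(display_protocol):
    """Return the publishing rules a session uses for a given display setting."""
    if display_protocol == "RDP":
        return ["HTTPS"]             # RDP rides the HTTPS publisher only
    if display_protocol == "PCoIP":
        return ["HTTPS", "PCoIP"]    # PCoIP needs both publishing rules
    raise ValueError(f"unknown display protocol: {display_protocol}")
```

The takeaway mirrors the prose: if PCoIP sessions fail while RDP sessions work, the PCoIP publishing rule (both TCP and UDP on 4172) is the first place to look.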
In the PCoIP publishing rule, notice how you need both TCP and UDP, and of course, the correct direction.
My friend Richard Hicks had some great information on his ForeFront TMG blog that was pertinent to this project. ForeFront TMG 2010 Protocol Direction Explained is a good reminder of what you will need to know when defining custom protocols, and the rule sets that use them. The other was a nuance of using RDP with the “Web Site Publishing Rule” generator. Let me explain.
TMG has a “Web Site Publishing Rule” generator that allows for a convenient way of exposing HTTP and HTTPS related traffic to the intended target. This publisher’s role is to protect by inspection. It terminates the session, decrypts, inspects, then repackages for delivery onto its destination. This is great for many protocols inside of SSL such as HTTP, but protocols like RDP inside SSL do not like it. This is what I was running into during deployment. View connections using PCoIP worked fine. View connections using RDP did not. Rich was able to help me better understand what the problem was, and how to work around it. The fix was simply to create a “Non-Web Server Protocol Publishing Rule” instead, choosing HTTPS as the protocol type. For all of you TMG users out there, this is the reason why I haven’t described how to create a “Web Listener” to be used with a traditional “Web Site Publishing Rule.” There is no need for one.
A few tips on implementing your new firewall rules. Again, most of these apply to any firewall you choose to use.
1. Even if you have the intent of granular lockdown (as you should), it may be easiest to initially define the rule sets a little more broadly. Use things like entire network segments instead of individually assigned machine objects. You can tighten the screws down later (remember to do so), and it makes issues easier to diagnose.
2. Watch those firewall logs. It’s easy to mess something up along the way, and your real-time firewall logs will be your best friend. But be careful not to get too fancy with the filtering; you may be missing some denied traffic that doesn’t necessarily match up with your filter.
3. You will probably need to create custom protocols. Name them so that it is clear whether each is an incoming or outgoing protocol, and perhaps whether it is TCP or UDP. Otherwise, it can get a little confusing when it comes to the direction of traffic. Rule sets have a direction, as do the protocols contained in them.
4. Stay disciplined with your rule set taxonomy. You will need to understand what each rule is trying to do, and consistency is key. You may find it more helpful to name the computer objects after the role they play, rather than their actual server names. It helps with understanding the flow of the rules.
5. Add some identifier to your rules defined for View. That way, when you are troubleshooting, you can enter “View” in the search function, and it quickly shows you only the rule sets you need to deal with.
6. Stick to the best practices when it comes to the placement of the View rules within your overall rule sets. TMG processes rules in order, so there are methods to make the processing most efficient. They remain unchanged from its predecessor, ISA 2006. Here is a good article on better understanding the order.
7. TMG 2010 has a nice feature for grouping rules, which allows a set of contiguous rules to be treated as one logical unit. You might find this helpful in most of your View-based rule creation. I would recommend keeping your access rules for View in a different group than your publishing rules, so that you can maintain best practices on the placement and priority of rule types.
8. When you get to the point of diagnosing what appear to be connection problems between clients, agents, and connection servers, give VMware a call. They have a few tools that will help in your efforts. Unfortunately, I can’t provide any more information about the tools at this time, but I can say that for the purposes of diagnosing connectivity issues, they are really nice.
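Before reaching for vendor tools, a quick first-pass check is simply whether the expected TCP ports answer at all from each segment. The sketch below uses only the Python standard library; the host names in it are hypothetical placeholders for your own Connection Servers and agent systems.

```python
import socket

def tcp_port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical names; substitute your own Connection Server and agent addresses.
checks = [
    ("view-cs1.local.lan", 443),     # HTTPS to the internal Connection Server
    ("view-cs1.local.lan", 4001),    # View Agent to Connection Server traffic
    ("desktop-01.local.lan", 4172),  # PCoIP to a system running the View Agent
]
for host, port in checks:
    state = "open" if tcp_port_open(host, port) else "blocked/closed"
    print(f"{host}:{port} -> {state}")
```

Note this only exercises TCP; the UDP side of PCoIP (also port 4172) cannot be verified with a simple connect and is exactly the kind of gap your firewall logs will reveal.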
I also stumbled upon an interesting (and apparently little known) issue when a system running the View Agent has multiple NICs. For me, this issue arose on the high-powered physical workstation with a PCoIP host card, using View as the connection broker. This system had two additional NICs that connected to the iSCSI SAN. The PCoIP-based connections worked, but RDP sessions through View failed, even when standard RDP worked without issue. When I shut off the other NICs, everything worked fine. VMware KB article 1026498 addresses this. The fix is simply adding a registry entry.
On the host with the PCoIP card, open regedit and add the following entry:
HKEY_LOCAL_MACHINE\SOFTWARE\VMware, Inc.\VMware VDM\Node Manager
Add the REG_SZ value:
If you experience issues connecting to one system running the View Agent, but not the other, a common practice is to remove and reinstall the View Agent. Any time the VMware Tools on the VM are updated, you will also need to reinstall the agent.
More experimentation and feedback
As promised in part 1 of this series, I wanted to keep you posted of feedback that I was getting from end users, and observations I had along the way.
The group of users allowed to connect continued to be impressed to the point that using it was a part of their workday. I found myself not being able to experiment quite the way I had planned, because users were depending on the service almost immediately. So much for that idea of it being a pilot project.
The experimentation with serving up a physical system with PCoIP using VMware View as a connection broker has continued to be an interesting one. There are pretty significant market segments that demand high-powered GPU processing; CAD/CAE, visualization, animation, graphic design, and the like have all historically relied on client-side GPUs. So it is a provocative thought to be able to serve up high-powered graphics workstations without one sitting under a desk somewhere. The elegance of this arrangement is that once a physical system has a PCoIP host card in it and the View Agent installed, it is really no different than the lower-powered VMs served up by the cluster. Access is the same, and so is the experience. Just a heck of a lot more power. Since it is all host-based rendering, you can make the remote system as powerful as your budget allows. Get ready for anyone who accesses a high-powered workstation like this to be easily spoiled. Before you know it, they will ask if they can have 48GB of RAM on their VM as well.
Running View from any client (Windows Workstation, Mac, Tablets, Ubuntu, and a Wyse P20 zero client) proved to give basically the same experience. It was easy for the end users to connect. Since I have a dual name space (“view.mycompany.com” from the outside, and “view.local.lan” from the inside), the biggest confusion has been for laptop users remembering which address to use. That, and reminding them to not use the VPN to connect. A few firewall rules blocking access will help guide them.
One of my late experiments came after I met all of my other acceptance criteria for the project. I wanted to see how VMware View worked with linked clones. Setting up linked clones was pretty easy. However, I didn’t realize until late in the project that a linked clone arrangement of View really requires you to run a Microsoft KMS licensing server. Otherwise, your trusty MAK license keys might be fully depleted in no time. There is a VMware KB Article describing a possible workaround, but it also warns you of the risks. Accommodating for KMS licensing is not a difficult matter to address (except for extremely small organizations who don’t qualify for KMS licensing), but it was something I didn’t anticipate.
I had the chance to do this entire design and implementation not once, but twice. No, it wasn’t because everything blew up and I had no backups. My intention was to build a pilot at my primary facility first, then build the same arrangement (as much as possible) at the colocation facility. What made this so fast and easy? As I did my deployment at the primary facility, I did all of my step-by-step design documentation in Microsoft OneNote, my favorite non-technical application. Step-by-step deployment of the systems, issues, and other oddball events were all documented the first time around. It made the second go really quick and easy. Whether it was my firewall configuration or the configurations of the various Connection Servers, the time spent documenting paid off quickly.
Next up, I’ll be going over some basic configuration settings of your connection servers, and maybe a little tuning.