February 12, 2012
In "VDI for Me, Part 1," I described a little bit about our particular interests and use cases for VDI, and how we wanted to deploy a pilot project to learn more about it. So now what? Where does a person start? Deployment guides and white papers for VMware View will tell you all of the possibilities. They are thorough, but perhaps a bit tough to decipher when you are trying to understand what a simple VDI arrangement might look like. So I'm just going to focus on how I built out a simple arrangement of VMware View, and how the components of VDI listed in my first post fit together.
Topology of VMware View
The topology of View should fit in very nicely with what you already have for a virtualized infrastructure. If you are already running a vSphere cluster in your environment, it's really just a matter of building your supporting components, which in a very simple or small deployment might be an instance of Composer and a View Connection Server or two.
VMware deployment guides will have all of the details of these components, so I will try to keep it brief.
Connection Server. This is the VM acting as the broker that your View Client software connects to, and it is what users inside your network will connect to (e.g. "view.corp.lan"). This is a domain-joined system, and should probably reside on your primary LAN network.
Security Server. This is another VM running the Connection Server component, but in a slightly different mode. Its purpose is to act as the secure proxy through which external clients connect to the internal systems presented by View. It should only be used by clients outside of your network (e.g. "view.yourcompany.com"). This server should not be joined to the domain, and should reside in a secure DMZ segment of your network where you have full, granular control of ingress and egress traffic.
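Since the internal broker and the external proxy answer on different names, a split-DNS arrangement is the usual result. The sketch below uses the example names from above; the zone names, host names, and addresses are all hypothetical (the IPs are from the documentation ranges), so substitute your own:

```
; Internal zone "corp.lan" -- what LAN clients resolve
view    IN  A   10.0.1.25      ; the Connection Server VM on the primary LAN

; Public zone "yourcompany.com" -- what external clients resolve
view    IN  A   203.0.113.10   ; public IP NATed to the Security Server in the DMZ
```

Internal clients land directly on the Connection Server, while external clients only ever see the address that terminates at the DMZ.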
Composer. This is the server behind the magic of issuing out many virtual desktops from a minimal storage footprint, by way of linked clones. For my simple deployment, I chose to install it on the VM running vCenter. I also chose to stick to the basics when configuring Composer, as I really wanted to focus on the behavior of the View Client itself, not experiment with how best to deploy hundreds of desktops with linked clones. Composer needs a SQL Server database, so you can use the database server that serves up vCenter, or something else if you wish.
Transfer Server. This is a dedicated server used for "offline mode" laptop users or "Bring Your Own PC" (BYOPC) arrangements. It is optional, but it gives a laptop user whose VDI VM traditionally lives on the infrastructure the ability to have that VM "checked out" to the laptop. The VM then uses the local resources of the laptop, and when/if it has the ability to phone home via an internet connection, it can send an update of the VM back. It is a compelling feature for those who might ask, "But what if I need my virtual desktop and I have no internet connection?" or if you are considering a BYOPC arrangement. It does not have to be joined to the domain, and its placement depends on the use case.
View Agent. This is the small bit of software installed on a system that you want VMware View to present to a user. In its simplest form, it can just be loaded onto an existing VM, or onto something served up by Composer. If you need really high-end horsepower, with perhaps intensive graphics, it can be installed on a physical workstation with a PCoIP host card, so that VMware View can serve it up just like any other resource (more on this in a later post). Right now, the agent software can only be installed on a Windows-based system.
View Client. This is the application used by the end user to connect to their virtual desktop. Currently the official client software is limited to Windows-based machines and iPads. However, there is a Technology Preview edition for Mac OS X and some distributions of Linux. Note that if you are using a PCoIP zero client, such as a Wyse P20, there is no software to install on it. Just connect it to the network, point it at the internal View Connection Server, and you are good to go.
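On Windows, the View Client can also be launched unattended from a script, which is handy when you want to hand remote users a one-click shortcut. This is only a sketch based on the command-line switches in the View client documentation; the install path, server name, user, and pool name here are hypothetical, so substitute your own:

```batch
:: Hypothetical scripted launch of the Windows View Client (wswc.exe).
:: Path and switch values below are examples -- adjust for your environment.
"C:\Program Files\VMware\VMware View\Client\bin\wswc.exe" ^
  -serverURL https://view.corp.lan ^
  -userName jsmith ^
  -domainName CORP ^
  -desktopName "Developer Pool"
```

Left like this, the client still prompts for the password, which is usually what you want anyway.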
Interestingly enough, if you need to access your VDI systems from outside of your LAN (thus requiring you to run the Security Server in a DMZ), you will need a separate, additional VM (domain joined) on the inside of your LAN that runs the Connection Server software and is used exclusively by the Security Server. The reason it cannot share the Connection Server designated above for internal use is that there are toggles that must be set based on the type of connection that is going to occur, and these toggles are mutually exclusive. (I'll address this more in later posts.) You can try to use just one Connection Server coupled with a Security Server for both internal and external access, but traffic will not flow efficiently, and performance will suffer.
I think VMware might need to review the architecture of the Connection Servers. Having three VMs (which doesn't even include the View Transfer Server) serve up internal and external access probably doesn't seem like much when you have hundreds of clients, but for a pilot or a smaller deployment it is overkill, and could be reduced to two with some tweaks in the Connection Server software. Perhaps it wouldn't be that big of a deal if they were just VM appliances. (hint hint…)
I chose the "View Premier Starter Pack," which included all of the things needed to get started: View Composer, ThinApp, View Persona Management, vShield Endpoint, and View Client for Local Mode. Also included are vSphere Enterprise Plus for Desktop standalone licenses and vCenter (special licensing considerations apply). The result is that you get to try out VDI on a single vSphere host, with up to 10 VDI clients and all of the tools necessary to experiment with the features of View. From there, you would just purchase additional client licenses or vSphere host licenses as necessary. It is really a great bundle, and an easy way to begin a pilot project with everything you need.
Placement of systems
More than likely, you will be presenting these systems to the outside world for some of your staff. This is going to involve some non-VMware-related decisions and configurations to get things running correctly. Modifications will need to be made to your firewall, DNS, SQL, vSphere, and AD. If you are a smaller organization, the administrator for all of these may be the same person (you). Daunting, but at least you cut out the middleman. That's the way I like to think of it. Nevertheless, you will want to exercise some care in the placement of systems, making sure traffic runs efficiently and securely, to minimize performance issues.
There is going to be quite a bit of communication between the Connection Servers, vCenter, the systems running the agent, and the clients running the View Client. VMware publishes a nice document on all of the ports needed for each type of connection. It's all technically there, but it does take a little time to decipher. I stared at this document endlessly, while keeping a close eye on my real-time firewall logs as I attempted to make various connections. I really needed both in order to get it working.
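While working through the ports document and the firewall logs, a quick TCP probe script saved me some back-and-forth. This is a hypothetical helper, not a VMware tool, and the port list is my reading of the View 5 connectivity document (verify it against your version); the server name is an example, and note that a TCP probe cannot see the UDP side of PCoIP:

```shell
#!/bin/bash
# Probe the common View TCP ports on a broker (hypothetical helper script).
#   443   HTTPS: client -> Connection/Security Server
#   4172  PCoIP (TCP side; PCoIP also uses UDP 4172, not tested here)
#   3389  RDP fallback to the desktop
#   8009  AJP13: Security Server -> internal Connection Server
#   4001  JMS:   Security Server -> internal Connection Server
check_port() {
  local host=$1 port=$2
  # /dev/tcp is a bash feature; timeout keeps dead hosts from hanging us
  if timeout 2 bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null; then
    echo "$host:$port open"
  else
    echo "$host:$port closed"
  fi
}

# "view.corp.lan" is the example internal broker name; override as needed
for p in 443 4172 3389 8009 4001; do
  check_port "${VIEW_SERVER:-view.corp.lan}" "$p"
done
```

Run it from each network segment in turn (LAN, DMZ, outside) and compare the results against what the ports document says should be reachable from there.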
As mentioned earlier, you should really have the Security Server VM sitting in a protected network segment off of your firewall. For me, the primary edge security device is Microsoft Forefront Threat Management Gateway (TMG) firewall software running on a nicely packaged appliance by Celestix Networks. I've said it before, and I'll say it again: it is such a shame that Microsoft does not get its due credit on this. My security friends in high places have consistently stated that there is simply nothing better when it comes to robust, flexible protection across the protocol and application stacks than TMG. I feel fortunate to be working with the good product that it is. The information that I share throughout this series will be based on the use of TMG, but the overall principles will be similar for other edge security solutions.
How much you need to modify your firewall depends on where you place some of these systems. Since these View-related components will be interacting with vCenter, you will have to plan accordingly. Some vSphere environments have their vSphere Management Network (aka Service Console network) simply behind a VLAN, routed directly to the LAN, while others have it on a separate network leg only accessible by a physical router or firewall. I fall into the latter category, as my old switches never had the capability to do inter-VLAN routing. Happily, those old switches have since been replaced, but my vSphere Management network still resides in an isolated network leg, and my firewall settings for VMware View will accommodate that.
While I wanted to deploy the systems correctly and minimize floundering, I also wanted to see some early results. After I built up the systems to manage the environment and worked out the connectivity issues, I thought I'd go ahead and install the View Agent on a few of the existing VMs used by our software developers located on another continent. These developers have played a very important role in the company's development efforts, so anything I could do to improve their user experience was a plus. Previously, their work environment consisted of their own laptops connected to our network via a PPTP-based VPN, then using RDP to connect up to a VM provisioned for them. Their VMs have all of their development applications and tools, which allows the data to be kept in the datacenter. Oh, and their typical latency? Around 280 ms on average. Yeah, their connection is that terrible.
So my first tests involved nothing more than installing the VMware View Agent on their VMs, then giving them the web address to download the VMware View Client and a few steps on how to connect. I also reminded them not to log into the VPN, as that was no longer necessary. I asked them to try both RDP and PCoIP over their connection, and to pass along some anecdotal feedback whenever they had a chance.
- Within 20 minutes they responded saying things like, “At first glance it works awesome” and “Much much faster than VPN + RDP.”
- That same day they told the rest of their team members about it, and during a Development review, stated, “please don’t take it away!”
- Further feedback included comments like “This feels at least 10x faster than the older method” along with “I wouldn’t dream of playing a video on the old RDP, and rotating a 3D view in our software would’ve made you want to bang your head against a wall. This [new] way is great.”
- Their impression over their highly latent connection was that using the View Client with PCoIP was much more responsive than the View Client with RDP.
What is really interesting is that I didn't make a single modification to tune PCoIP, nor did I adjust their VMs in any way. That is the power of a new approach to remote display rendering. I do not know the exact thresholds that Teradici (the makers of the PCoIP protocol) were shooting for when it came to latency, but I'm willing to bet that nearly 300 ms of latency was outside of their typical use case. It is quite the compliment to the power of PCoIP. Needless to say, once it was tested, I couldn't bear to take this away from the remote developers, so I did my best to work my deployment around their work days. This was really exciting because these were just normal VMs that I was serving up. I hadn't yet had a chance to serve up the high-end workstation with the VMware View Agent installed on it, something I was really looking forward to.
Some more lessons and observations
I've witnessed what others have experienced: it is far too easy to let a pilot project turn into a production environment. FAST. With reactions like the ones I shared above, it happens right before your eyes. The neat features that were demonstrated became must-haves almost overnight. No wonder so many small VDI deployments ultimately suffer in some form or another as they grow. So if you are contemplating even a pilot deployment, keep this in mind. I'm doing my best to contain expectations until I have one of the key ingredients in place: fast storage targeted at IOPS-hungry VDI deployments. For me this means a Dell EqualLogic PS6100XV hybrid array. Until that time, I am not going to even consider expanding this beyond the 10 instances that come with the View Premier Starter Pack.
It was nice to see during my early deployment that VMware released a "Technology Preview" edition of the View 5.0 Client for Mac OS X, as well as a version for Ubuntu. It is a nice step forward, but it doesn't go far enough. A fully supported View Client needs to be provided for Red Hat clones and Debian-based Linux distributions. Likewise, the View Agent (the bit of software installed on the VM or the physical workstation that makes the system available via View) is limited to Windows operating systems only. It is my position that having an agent for Linux distributions would serve VMware quite well, much better than they even know. I'm hoping this is in their feature backlog.
Next up, I'll be going over some typical View Administrator configuration settings, firewall settings to get View and PCoIP flowing correctly, and my experimentation with a physical workstation that has a PCoIP host card and the VMware View Agent installed so that it can be brokered by VMware View.
A recently released post by VMware describing exactly the benefit of hardware-based systems that we are trying to exploit.
VMware View 5.0 Documentation Portal
A comprehensive guide to the ports that need to be opened for View 5 and its related services
The readme for the View Client Tech Preview for Mac OS X
EqualLogic hybrid arrays in VDI testing