VDI for me. Part 5 – The wrap up


Now that VMware View is up and running, you might be curious to know how it is working.  Well, you’re in luck, because this post is about how View worked, and what was learned from this pilot project.  But first, here is a quick recap of what has been covered so far.

VDI for me. Part 1 – Why the interest 
VDI for me. Part 2 – The Plan
VDI for me. Part 3 – Firewall considerations
VDI for me. Part 4 – Connection Servers and tuning

I was given the opportunity to try VMware View for a few different reasons (found here).  I wasn’t entirely sure what to expect, but was determined to get a good feel for what VDI in 2012 could do.  Hopefully this series has helped you gain an understanding as well. 

The user experience
Once things were operational, the ease and ubiquity of access to the systems was impressive.  One of our most frequent users often said that he simply forgot where the work was actually being performed.  Comments like that are a good indicator of success.  From a remote interaction standpoint, the improvements showed up where they were needed most: remote display over high-latency connections, with convenient access.  Being able to work from inside one corporate network on a system sitting behind another was as productive as it was cool. 

It was interesting to observe how some interpreted the technology.  Some embraced it for what it was (an appliance to be more productive), while others chose to be more suspicious.  You may have users who complain about their existing computers, but are apprehensive at the notion of it being taken away for something that isn’t tangible.  Don’t underestimate this emotional connection between user and computer.  It’s a weird, but very real aspect of a deployment like this. 

Virtualization Administrators know that good performance is often the result of a collection of components (storage, network, CPU, hypervisor) working well together through a good design.  Those of us who have virtualized our infrastructures are accustomed to this.  Users are not.  As VMs become more exposed to end users (whether for VDI, or other user-facing needs), your technical users may become overly curious about what’s “under the hood” with their VM.  This can be a problem.  Comparisons between their physical machine and the VM are inevitable, and they may interpret a VM with half the processors and RAM of their physical machine as providing only half the experience.  You might even be able to demonstrate that the VM is indeed better performing in many ways, yet the response might be that they still don’t have enough RAM, CPU, etc.  The end user knows nothing about hypervisors or IOPS, but they will pay attention to the common specifications general consumers of technology have been taught to care about: RAM and CPUs.

So in other words, there will be aspects of a project like this that have everything to do with virtualization, yet nothing to do with virtualization.  It can be as much of a people issue as it is a technical issue.

Other Impressions
The PCoIP protocol is very nice, and really shines in certain situations. I love the fact that it is a tunable, non-connection-oriented protocol that leaves all of the rendering up to the host. It just makes so much sense for remote displays. But it can have characteristics that make it feel different to the end user. The old “window shake” test might redraw itself slightly differently than on a native display, or when using something like RDP. This is something that the user may or may not notice.

The pilot program included the trial of a PCoIP based Zero Client. The Wyse P20 didn’t disappoint. Whether it was connecting to a VM brokered by View, or a physical workstation with a PCoIP host card brokered by View, the experience was clean and easy. Hook up a couple of monitors to it, and turn it on. It finds the connection server, so all you need to do is enter your credentials, and you are in. The zero client was limited to just PCoIP, so if you need flexibility in that department, perhaps a thin client might be more appropriate for you. I wanted to see what no hassle was really like.

As far as feedback, the top three questions I usually received from users went something like this:

“Does it run on Linux?”

“How come it doesn’t run on Linux?”

“When is it going to run on Linux?”

And they weren’t just talking about the View Client (which as of this date will run on Ubuntu 11.04), but more importantly, the View Agent.  There are entire infrastructures out there built on frameworks and solutions that run on nothing but Linux.  This is especially true in arenas like the Software Development, CAE, and Scientific communities.  Even many of VMware’s offerings are built on frameworks that have little to do with Windows.  The impression that View’s supported platforms gave our end users was that VMware’s family of solutions was just Windows-based.  Most of you reading this know that simply isn’t true.  I hope VMware takes a look at getting View agents and clients out for Linux.

Serving up physical systems using View as the connection broker is an interesting tangent to the whole VDI experience.  But of course, this is a one-user-to-one-workstation arrangement – it’s just that the workstation isn’t stuffed under a desk somewhere.  I suspect that VMware and its competitors are going to have to tackle the problem of how to harness GPU power through the hypervisor, so that all but the most demanding of high end systems can be virtualized.  Will it happen with specialized video cards likely to come from the VMware/NVIDIA partnership announced in October of 2011?  Will it happen with some sort of SR-IOV?  The need for GPU performance is there.  How it will be achieved, I don’t know.  In the short term, if you need big-time GPU power, a physical workstation with a PCoIP host card will work fine.

The performance and wow factor of running a View VM on a tablet is high as well.  If you want to impress anyone, just show this setup on a tablet.  Two or three taps on the tablet and you are in.  But we all know that User Experience (UX) designs for desktop applications were meant for a large screen, mouse, and keyboard.  It will be interesting to see how the evolution of these technologies continues, so that UX can hit mobile devices in a more appropriate way.  Variations of application virtualization are perhaps the next step.  Again, another exciting unknown.

Also worth noting is the competition, not only in classically defined VDI solutions, but in access to systems generally.  A compelling aspect of using View is that it pairs a solution for remote display with brokering of secure remote access in one package.  But competing solutions do not necessarily have to take that approach.  Microsoft’s DirectAccess allows secure RDP sessions to occur without a traditional VPN.  I have not yet had an opportunity to try their Unified Access Gateway (UAG) solution, but it gets rave reviews from those who implement it and use it.  Remote Desktop Session Host (RDSH) in Windows Server 8 promises big things (if you only use Windows, of course).

Among the other challenges is how to implement such technologies in a way that is cost effective.  Up-front costs associated with going beyond a pilot phase might be a bit tough to swallow, as technical challenges such as storage I/O deserve attention.  I suspect the new wave of SSD and SSD-hybrid SAN arrays out there might make the technical and financial challenges more palatable.  I wish that I had the opportunity to demonstrate how well these systems would work on an SSD or hybrid array, but the word “pilot” typically means “keep the costs down.”  So no SSD array until we move forward with a larger deployment.

There seems to be a rush by many to take a position on whether VDI is the wave of the future, or a bust that will never happen.  I don’t think it’s necessary to think that way.  It is what it is: a technology that might benefit you or the business you work for, or it might not.  What I do know is that it is rewarding and fun to plan and deploy innovative solutions that help end users, while addressing classic challenges within IT.  This was one of those projects.

Those who have done these types of implementations will tell you that successful VDI deployments always pay acute attention to the infrastructure, especially storage.  (Reading about failed implementations seems to confirm this.)  I believe it.  I was almost happy that my licensing forced me to keep this deployment small, as I could focus on the product rather than some of the implications of storage I/O that would inevitably come up with a larger deployment.  Economies of scale make VDI intriguing in deployment and operation.  However, it appears that scaling is the tricky part. 

What might also need a timely update is Windows licensing.  There is usually little time left in the day to understand the nuances of EULAs in general – especially Windows licensing.  VDI adds an extra twist to this.  A few links at the end of this post will help explain why.

None of these items discounts the power and potential of VDI.  While my deployments were very small, I did get a taste of its ability to consolidate corporate assets back to the data center.  The idea of provisioning, maintaining, and protecting end user systems seems possible again, and in certain environments could be a profound improvement.  It is easy to envision smaller branch offices greatly reducing, or eliminating, servers at their location.  AD designs simplify.  Assets simplify, as does access control – all while providing a more flexible work environment.  Not a bad combination.

Thanks for reading.

Helpful Links:
Two links on Windows 7 SPLA and VDI

RDSH in Windows Server 2008

VDI has little to do with the Desktop

Scott Lowe’s interesting post on SR-IOV

Improving density of VMs per host with Teradici’s PCoIP Offload card for VMware View

VDI for me. Part 4


Picking up where we left off in VDI for me. Part 3, we are at a point in which the components of View can be installed and configured.  As much as I’d like to walk you through each step, and offer explanations at each point, sticking to abbreviated steps is a better way to help you understand how the pieces of the puzzle fit together.  Besides, others have great posts on installing and configuring the View Connection servers, not to mention the VMware documentation, which is quite good.  The links at the end of the post will give you a good start.  My focus will be to hit on the main areas to configure to get View up and running.

Here is the relationship between the Connection Servers, the clients, and the systems running the agents in my environment.  The overall topology for my View environment can be found in VDI for me. Part 2.

For clients accessing View from the Internal LAN


For clients accessing View from offsite locations


Overview of steps
This is the order I used for deploying the View Components.  To simplify, you may wish to skip steps 3 and 4 until you get everything working on the inside of your network. 

  1. Install Composer on your vCenter Server.
  2. Build a VM and install View Connection Server intended for local LAN access only.
  3. Build a VM and install View Connection Server intended to talk with Security Server.
  4. Build a VM and install View Security Server in your DMZ.
  5. Install View Agent on a VM.
  6. Create a Pool for the VM, and entitle the pool to a user or group in AD.
  7. Connect to the VM using the View Client.

Configuring your first Connection Server (For Internal Access)
From the point that your first Connection Server is installed, you may begin the configuration.

  1. Browse out to the VMware View Administrator portal on your connection server (https://[yourconnectionserver]/admin) and enter the appropriate credentials.
  2. Drill down into View Configuration > Product Licensing and Usage > Edit License to add your license information.
  3. Register your vCenter Server by going to View Configuration > Servers > Add.  Fill out all of the details, but do not click “Enable View Composer” quite yet.  Click OK to exit.
  4. Go back in to edit the vCenter Server configuration, click “Enable View Composer”, and click OK to exit.
  5. In the list of View Connection Servers, select the only server shown, and click “Edit”.  You will want to make sure both check boxes are unchecked, and use internal FQDNs and IP addresses only.
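Once the first connection server is configured, a quick sanity check from a machine on the LAN can confirm that the View Administrator portal is answering over HTTPS.  This is just a sketch: the hostname in the example is a placeholder, and certificate verification is skipped because a freshly installed connection server typically presents a self-signed certificate.

```python
import socket
import ssl
import urllib.error
import urllib.request

def admin_url(server):
    """Build the View Administrator portal URL for a connection server."""
    return "https://%s/admin" % server

def portal_reachable(server, timeout=5):
    """Return True if the portal answers over HTTPS.

    Certificate checking is disabled here on purpose: a fresh
    connection server usually presents a self-signed certificate
    that would fail normal verification.
    """
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    try:
        with urllib.request.urlopen(admin_url(server), timeout=timeout, context=ctx):
            return True
    except (urllib.error.URLError, OSError):
        return False

# Example (placeholder hostname):
# print(portal_reachable("view-cs1.local.lan"))
```

This only proves the web listener is up; a failed check usually points at DNS, the service not running, or a firewall rule rather than View itself.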


Configuring your Second Connection Server (to be paired with Security Server)
During the installation of View on the second server, it will ask what type of Connection Server it will be.  Choose “Replica” from the list, and type in the name of your first Connection Server.

  1. Browse out to the View Administrator Portal, and you will now see a second connection server listed.  Highlight it, and click on Edit.
  2. Unlike the first connection server, this connection server needs to have both checkboxes checked.


Configuring your Security Server (to be paired with your second Connection Server)
Just a few easy steps will take care of your Security Server.

  1. Browse out to the View Administrator portal, highlight the Connection Server you want to pair with the security server, and click More Commands > Specify Security Server Pairing Password.
  2. Install the Connection Server install bits onto your new Security Server.  Choose “Security Server” for the type of Connection Server it will be.  It will then prompt you to enter the internal Connection Server to pair it to.  This is the internal FQDN of the second Connection Server.
  3. Enter the View pairing password established in step 1.  This will make the Security Server show up in the View Administrator Portal.
  4. Go back to the View Administrator portal, highlight the server that is listed under the Security Server, and click Edit.  This is where you will enter the desired external FQDN.  The PCoIP address should be the publicly registered IP address.  In my case, it is the address bound to the external interface of my firewall, but your topology might dictate otherwise.



After it is all done, in the View Administrator portal, you should see one entry for a vCenter server, two entries for the View Connection servers, and one entry for a Security Server.


From this point, it is just a matter of installing the View Agent on the VMs (or physical workstation with a PCoIP host card) you’d like to expose, create a pool, entitle a user or group, and you should be ready to connect.

After you add the VMware View ADM templates to Active Directory, a number of tunable settings will be available to you.  The good news in the tuning department is that while PCoIP is highly tunable, it doesn’t have to be the first thing you address after the initial deployment.  With View 5, it works quite well out of the box.  I will defer to this post http://myvirtualcloud.net/?p=2061 on some common View-specific GPO settings you might want to adjust, especially in a WAN environment.  The two settings that will probably make the biggest impact are the “Maximum Frame Rate” setting, and the “Build to Lossless” toggle.  I applied these and a few others to help our Development Team on another continent deal with their 280+ ms of latency. 
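For illustration, those GPO settings ultimately land as PCoIP session variables in the registry of the system running the agent.  The fragment below is a hypothetical example only: the key path and value names are my understanding of Teradici’s ADM template, so verify them against the template version you actually import before relying on them.

```reg
Windows Registry Editor Version 5.00

; Hypothetical example of two PCoIP session variables as applied by policy.
; Verify the path and value names against your ADM template version.
; Frame rate capped at 15 fps; Build to Lossless turned off.
[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Teradici\PCoIP\pcoip_admin]
"pcoip.maximum_frame_rate"=dword:0000000f
"pcoip.enable_build_to_lossless"=dword:00000000
```

Pushing these through a GPO rather than hand-editing the registry keeps every agent in the pool consistent.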

The tools available to monitor, test, and debug PCoIP are improving almost by the day, and will be an interesting area to watch.  Take a look at the links for the PCoIP Log Viewer and the PCoIP Configuration utilities at the end of this post.

Tips and Observations
When running View, there is a noticeably increased dependence on vCenter, and on the databases that support it and View Composer.  This is especially the case in smaller environments, where the server running vCenter might be housing the vCenter database and the database for View Composer.  Chris Wahl’s recent post Protecting the vCenter Database with SQL Log Shipping addresses this, and provides a good way to protect the vCenter databases through log shipping.  If you are a Dell EqualLogic user, it may be helpful to move your SQL DB and log volumes off to guest attached volumes, and use their ASM/ME application to easily make snaps and replicas of the database.  Regardless of the adjustments that you choose to make, factor this into your design criteria, especially if the desktops served up by View become critical to your business.

If your connection to a View VM terminates prematurely, don’t worry.  It seems to be a common occurrence during initial deployment that can happen for a number of reasons.  There are a lot of KB articles on how to diagnose them.  One issue I ran across that wasn’t documented very much was the VM not having enough video RAM assigned.  The result can be that it works fine using RDP, but disconnects when using PCoIP.  I’ve had some VMs mysteriously reduce themselves back down to a default number that won’t support large or multiple screen resolutions.  Take a quick look at the settings of your VM.  Once those initial issues have been resolved, I’ve found everything to work as expected.

In my mad rush to build out the second View environment at our CoLo, everything worked perfectly, except when it came to the View client validating the secured connection. All indicators pointed to SSL, and perhaps how the exported SSL certificate was applied to the VM running the Security Server. I checked, and rechecked everything, burning up a fair amount of time. It turned out it was a silly mistake (aren’t they all?). In C:\Program Files\VMware\VMware View\Server\sslgateway\conf there needs to be a file called locked.properties. This contains information on the exported certificate. Well, when I created the locked.properties file, Windows was nice enough to append the .txt to it (e.g. locked.properties.txt). The default settings in Windows left that extension hidden, so it didn’t show. By the way, I’ve always hated that default setting for hiding file extensions. It is controlled via GPO at my primary site, but didn’t have that set at the CoLo site.
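For reference, the locked.properties file itself is tiny.  Per the View documentation it simply points the Security Server’s SSL gateway at the exported keystore; the keystore filename and password below are placeholders, not values from my environment.

```properties
# C:\Program Files\VMware\VMware View\Server\sslgateway\conf\locked.properties
# keyfile names the exported certificate keystore in the same folder;
# keypass is the password used when the keystore was exported.
keyfile=keys.p12
keypass=secret
```

A quick `dir` in that folder (or turning off “Hide extensions for known file types”) would have caught my locked.properties.txt mistake immediately.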

Next up, I’ll be wrapping up this series with the final impressions of the project.  What worked well, what didn’t.  Perceptions from the users, and from those writing the checks.  Stay tuned.

Helpful Links
VMware View Documentation Portal.  A lot of good information here.

A couple of nice YouTube videos showing a step by step installation of View Composer

How to apply View specific settings for systems via GPO (written for 4.5, but also applies to 5.0)

PCoIP disconnect codes

PCoIP Log Viewer

PCoIP Configuration utility (beta)

More PCoIP tuning suggestions

VDI for me. Part 3


In VDI for me. Part 2, I left off with how VMware View was going to be constructed in my environment.  We are almost at the point of installing and configuring the VMware View components, but before that is addressed, the most prudent step is to ensure that the right type of traffic can communicate across the different isolated network segments.  This post is simply going to focus on the security rules to do such a thing.  For me, access to these segments is managed by a Celestix MSA 5200i, 6-port firewall running Microsoft Forefront Threat Management Gateway (TMG) 2010.  While the screen captures are directly from TMG, much of the information here would apply to other security solutions.

Since all of the supporting components of VMware View will need to communicate across network segments anyway, I suggest making accommodations in your firewall before you start building the View components.  Sometimes this is not always practical, but in this case, I found that I only had to make a few adjustments before things were working perfectly with all of the components.

My network design was a fairly straightforward, 4 legged topology. (a pretty picture of this can be seen in Part 2)

Network Leg   Contains
External      All users connecting to our View environment.
LAN           View Connection Server dedicated for access from the inside.
              View Connection Server dedicated for communication with the Security Server.
              Systems running the View Agent software.
DMZ1          Externally facing View “Security Server”.
DMZ4          vSphere Management Network: vCenter, and the SQL databases providing services for vCenter and View Composer.

For those who have their vSphere Management network on a separate network by way of a simple VLAN, your rules will be simpler than mine.  For clarity, I will just show the rules that are used for getting VMware View to work.

Before you get started, make sure you have planned out all of the system names and IP addresses of the various Connection Servers and the VMs running the View Agent.  It will make the work later on easier. 

Creating Custom Protocols for VMware View in TMG 2010
In order to build the rules properly, you will first need to define some “User-Defined” protocols.  For the sake of keeping track of all of the user-defined protocols, I always included the name “View” (to remember its purpose), the direction, the type, and the port number.  Here is the list (as I named them) that was used as part of my rule sets.

VMware View Inbound TCP&UDP (4172)
VMware View Outbound (32111)
VMware View Outbound (4001)
VMware View Outbound (8009)
VMware View Outbound (9427)
VMware ViewComposer Inbound (18443)
VMware ViewComposer Outbound (18443)
VMware ViewPCoIP Outbound (4172)
VMware ViewPCoIP SendReceiveUDP (4172)
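If you want to verify the TCP side of these rules before installing any View components, a simple socket probe will do.  The port-to-purpose mapping below reflects my reading of the View documentation, and the hostname in the example is a placeholder; note that UDP 4172 can’t be checked this way, since UDP is connectionless.

```python
import socket

# Ports from the protocol list above (name -> (port, protocols)).
# The purpose labels are my interpretation of the View docs.
VIEW_PORTS = {
    "PCoIP":          (4172,  ("tcp", "udp")),
    "USB redirect":   (32111, ("tcp",)),
    "JMS":            (4001,  ("tcp",)),
    "AJP13":          (8009,  ("tcp",)),
    "MMR":            (9427,  ("tcp",)),
    "View Composer":  (18443, ("tcp",)),
}

def tcp_port_open(host, port, timeout=3):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example sweep against a connection server (placeholder hostname):
# for name, (port, protos) in VIEW_PORTS.items():
#     if "tcp" in protos:
#         print(name, port, tcp_port_open("view-cs1.local.lan", port))
```

Run it from each network leg in turn and the results map directly onto which firewall rules are (or aren’t) doing their job.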

Page 19 of the VMware View Security Reference will detail the ports and access needed.  I appreciate the detail, and it is all technically correct, but it can be a little confusing.  Hopefully, what I provide will help bridge the gap on anything confusing in the manual.  My implementation at this time does not include a View Transfer Server, so if your deployment includes this, please refer to the installation guide.

Creating Access Rules for VMware View in TMG 2010
The next step will be to build some access rules.  Access rules are typically defining access in a From/To arrangement.  Here are what my rules looked like for a successful implementation of VMware View in TMG 2010.


Creating Publishing rules for VMware View in TMG 2010
In the screen above, near the bottom, you see two Publishing rules.  These are for the purposes of securely exposing a server that you want visible to the outside world.  In this case, that would be the View Security Server.  The server will still have its private address as it resides in the DMZ, but would take on one of the assigned public IP addresses bound to the external interface of the TMG appliance.  To make View work, you will need two publishing rules.  One for HTTPS, and the other for PCoIP.  A View session with the display setting of RDP will use only the HTTPS publisher.  A View session with the display setting of PCoIP will use both of the publishing rules.  Page 65 of the View 5 Architecture Planning Guide illustrates this pretty well.

In the PCoIP publishing rule, notice how you need both TCP and UDP, and of course, the correct direction.


My friend Richard Hicks had some great information on his ForeFront TMG blog that was pertinent to this project. ForeFront TMG 2010 Protocol Direction Explained is a good reminder of what you will need to know when defining custom protocols, and the rule sets that use them.  The other was the nuances of using RDP with the “Web Site Publishing Rule” generator.  Let me explain.

TMG has a “Web Site Publishing Rule” generator that allows for a convenient way of exposing HTTP and HTTPS related traffic to the intended target. This publisher’s role is to protect by inspection. It terminates the session, decrypts, inspects, then repackages for delivery onto its destination. This is great for many protocols inside of SSL such as HTTP, but protocols like RDP inside SSL do not like it. This is what I was running into during deployment. View connections using PCoIP worked fine. View connections using RDP did not. Rich was able to help me better understand what the problem was, and how to work around it. The fix was simply to create a “Non-Web Server Protocol Publishing Rule” instead, choosing HTTPS as the protocol type.  For all of you TMG users out there, this is the reason why I haven’t described how to create a “Web Listener” to be used with a traditional “Web Site Publishing Rule.”  There is no need for one.

A few tips on implementing your new firewall rules.  Again, most of these apply to any firewall you choose to use.

1.  Even if you have the intent of granular lockdown (as you should), it may be easiest to initially define the rule sets a little broader.  Use things like entire network segments instead of individually assigned machine objects.  You can tighten the screws down later (remember to do so), and it is easier to diagnose issues.

2.  Watch those firewall logs.  It’s easy to mess something up along the way, and your real-time firewall logs will be your best friend.  But be careful not to get too fancy with the filtering.  You may be missing some denied traffic that doesn’t necessarily match up with your filter.

3.  You will probably need to create custom protocols.  Name them in such a way that they are clear that they are an incoming or outgoing protocol, and perhaps whether they are TCP, or UDP.  Otherwise, it can get a little confusing when it comes to direction of traffic.  Rule sets have a direction, as do the protocols that are contained in them.

4.  Stay disciplined with your rule set taxonomy.  You will need to understand what each rule is trying to do.  Consistency is key.  You may find it more helpful to name the computer objects after the role they play, rather than their actual server name.  It helps with understanding the flow of the rules.

5.  Add some identifier to your rules defined for View.  That way, when you are troubleshooting, you can enter “View” in the search function, and it quickly shows you only the rule sets you need to deal with.

6.  Stick to the best practices when it comes to placement of the View rules in your overall rule sets.  TMG processes rules in order, so there are methods to make the processing most efficient.  They remain unchanged from its predecessor, ISA 2006.  Here is a good article on better understanding the order.

7.  TMG 2010 has a nice feature of grouping rules.  This gives the ability of a set of contiguous rules to be seen as one logical unit.  You might find this helpful in most of your View based rule creation.  I would probably recommend having your access rules for View in a different group than your publishing rules.  This is so that you can maintain best practices on placement/priority of rule types.

8.  When you get to the point of diagnosing what appear to be connection problems between clients, agents, and connection servers, give VMware a call.  They have a few tools that will help in your efforts.  Unfortunately, I can’t provide any more information about the tools at this time, but I can say that for the purposes of diagnosing connectivity issues, they are really nice.

I also stumbled upon an interesting (and apparently little-known) issue when you have a system with multiple NICs that is also running the View Agent.  For me, this issue arose on the high-powered physical workstation with a PCoIP host card, using View as the connection broker.  This system had two additional NICs that connected to the iSCSI SAN.  The PCoIP-based connections worked, but RDP sessions through View failed, even when standard RDP worked without issue.  Shut off the other NICs, and everything worked fine.  VMware KB article 1026498 addresses this.  The fix is simply adding a registry entry.

On the host with the PCoIP card, open regedit and add the following entry:

Add the REG_SZ value:

If you experience issues connecting to one system running the View Agent, but not the other, a common practice is to remove and reinstall the View Agent.  Any time the VMware Tools on the VM are updated, you will also need to reinstall the agent. 

More experimentation and feedback
As promised in part 1 of this series, I wanted to keep you posted of feedback that I was getting from end users, and observations I had along the way. 

The group of users allowed to connect continued to be impressed to the point that using it was a part of their workday.  I found myself not being able to experiment quite the way I had planned, because users were depending on the service almost immediately.  So much for that idea of it being a pilot project.

The experimentation with serving up a physical system with PCoIP using VMware View as a connection broker has continued to be an interesting one.  There are pretty significant market segments that demand high-powered GPU processing.  CAD/CAE, visualization, animation, graphic design, and others have all historically relied on client-side GPUs.  So it is a provocative thought to be able to serve up high-powered graphics workstations without them sitting under a desk somewhere.  The elegance of this arrangement is that once a physical system has a PCoIP host card in it and the View Agent installed, it is really no different than the lower-powered VMs served up by the cluster.  Access is the same, and so is the experience.  Just a heck of a lot more power.  Since it is all host-based rendering, you can make the remote system as powerful as your budget allows.  Get ready for anyone who accesses a high-powered workstation like this to be spoiled easily.  Before you know it, they will ask if they can have 48GB of RAM on their VM as well. 

Running View from any client (Windows workstation, Mac, tablets, Ubuntu, and a Wyse P20 zero client) proved to give basically the same experience.  It was easy for the end users to connect.  Since I have a dual namespace (“view.mycompany.com” from the outside, and “view.local.lan” from the inside), the biggest confusion has been for laptop users remembering which address to use.  That, and reminding them not to use the VPN to connect.  A few firewall rules blocking access will help guide them. 

One of my late experiments came after I met all of my other acceptance criteria for the project.  I wanted to see how VMware View worked with linked clones.  Setting up linked clones was pretty easy.  However, I didn’t realize until late in the project that a linked clone arrangement of View really requires you to run a Microsoft KMS licensing server.  Otherwise, your trusty MAK license keys might be fully depleted in no time.  There is a VMware KB Article describing a possible workaround, but it also warns you of the risks.  Accommodating for KMS licensing is not a difficult matter to address (except for extremely small organizations who don’t qualify for KMS licensing), but it was something I didn’t anticipate.

I had the chance to do this entire design and implementation not once, but twice.  No, it wasn’t because everything blew up and I had no backups.  My intention was to build a pilot out at my Primary facility first, then build the same arrangement (as much as possible) at the CoLocation facility.  What made this so fast and easy?  As I did my deployment at my Primary facility, I did all of my step by step design documentation in Microsoft OneNote; my favorite non-technical application.  Step by step deployment of the systems, issues, and other oddball events were all documented the first time around.  It made the second go really quick and easy.  Whether it be my Firewall configuration, or the configurations of the various Connection Servers, the time spent documenting paid off quickly.

Next up, I’ll be going over some basic configuration settings of your connection servers, and maybe a little tuning.