Now that VMware View is up and running, you might be curious to know how it worked out. Well, you’re in luck, because this post is about how View performed, and what was learned from this pilot project. But first, here is a quick recap of what has been covered so far.
VDI for me. Part 1 – Why the interest
VDI for me. Part 2 – The Plan
VDI for me. Part 3 – Firewall considerations
VDI for me. Part 4 – Connection Servers and tuning
I was given the opportunity to try VMware View for a few different reasons (found here). I wasn’t entirely sure what to expect, but was determined to get a good feel for what VDI in 2012 could do. Hopefully this series has helped you gain an understanding as well.
The user experience
Once things were operational, the ease and ubiquity of access to the systems was impressive. One of our most frequent users often said that he simply forgot where the work was actually being performed. Comments like that are a good indicator of success. From a remote interaction standpoint, the improvements showed up most where they were really needed: remote display over high-latency connections, with convenient access. Being able to reach a remote system from behind one corporate network into another was as productive as it was cool.
It was interesting to observe how some interpreted the technology. Some embraced it for what it was (a tool to be more productive), while others were more suspicious. You may have users who complain about their existing computers, but are apprehensive at the notion of them being taken away for something that isn’t tangible. Don’t underestimate this emotional connection between user and computer. It’s a weird, but very real, aspect of a deployment like this.
Virtualization administrators know that good performance is usually the result of a collection of components (storage, network, CPU, hypervisor) working well together through a good design. Those of us who have virtualized our infrastructures are accustomed to this. Users are not. As VMs become more exposed to end users (whether for VDI or other user-facing needs), your technical users may become overly curious about what’s “under the hood” of their VM. This can be a problem. Comparisons between their physical machine and the VM are inevitable, and they may interpret a VM with half the processors and RAM of their physical machine as providing only half of the experience. You might even be able to demonstrate that the VM actually performs better in many ways, yet the response might be that they still don’t have enough RAM, CPU, etc. The end user knows nothing about hypervisors or IOPS, but they will pay attention to the specifications general consumers of technology have been taught to care about: RAM and CPUs.
So in other words, there will be aspects of a project like this that have everything to do with virtualization, yet nothing to do with virtualization. It can be as much of a people issue as it is a technical issue.
The PCoIP protocol is very nice, and really shines in certain situations. I love the fact that it is a tunable, connectionless (UDP-based) protocol that leaves all of the rendering up to the host. It just makes so much sense for remote displays. But it can have characteristics that make it feel different to the end user. The old “window shake” test might redraw itself slightly differently than on a native display, or over something like RDP. This is something the user may or may not notice.
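To give a flavor of that tunability, here is a sketch of what PCoIP session tuning on the View Agent host can look like. The key path and session variable names below come from Teradici’s PCoIP GPO administrative template as I understand it, and the specific numbers are placeholders rather than recommendations, so verify everything against the template that ships with your View version before using it.

```reg
Windows Registry Editor Version 5.00

; PCoIP session variables applied on the View Agent host.
; These mirror settings exposed by the PCoIP GPO administrative template.
[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Teradici\PCoIP\pcoip_admin_defaults]

; Cap the frame rate (frames per second) to save bandwidth on slow WAN links
"pcoip.maximum_frame_rate"=dword:0000001e

; Lowest image quality PCoIP may drop to during congestion (scale of 0-100)
"pcoip.minimum_image_quality"=dword:00000028

; Hard cap on total session bandwidth, in kilobits per second
"pcoip.device_bandwidth_limit"=dword:00002710
```

Changes like these take effect on the next PCoIP session connection, which makes it practical to experiment with different values for users on constrained links.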
The pilot program included the trial of a PCoIP-based zero client. The Wyse P20 didn’t disappoint. Whether it was connecting to a VM brokered by View, or to a physical workstation with a PCoIP host card brokered by View, the experience was clean and easy. Hook up a couple of monitors, and turn it on. It finds the connection server, so all you need to do is enter your credentials, and you are in. The zero client was limited to just PCoIP, so if you need flexibility in that department, a thin client might be more appropriate for you. I wanted to see what “no hassle” was really like.
As for feedback, the top three questions I received from users went something like this:
“Does it run on Linux?”
“How come it doesn’t run on Linux?”
“When is it going to run on Linux?”
And they weren’t just talking about the View Client (which as of this writing will run on Ubuntu 11.04), but more importantly, the View Agent. There are entire infrastructures out there built on frameworks and solutions that run on nothing but Linux. This is especially true in arenas like software development, CAE, and the scientific community. Even many of VMware’s offerings are built on frameworks that have little to do with Windows. The impression View’s supported platforms gave our end users was that VMware’s family of solutions was Windows-only. Most of you reading this know that simply isn’t true. I hope VMware takes a look at getting View agents and clients out for Linux.
Serving up physical systems using View as the connection broker is an interesting tangent to the whole VDI experience. But of course, this is a one-user-to-one-workstation arrangement – it’s just that the workstation isn’t stuffed under a desk somewhere. I suspect that VMware and its competitors will have to tackle the problem of harnessing GPU power through the hypervisor so that all but the most demanding high-end systems can be virtualized. Will it happen with specialized video cards from the VMware/NVIDIA partnership announced in October of 2011? Will it happen with some form of SR-IOV? The need for GPU performance is there. How it will be achieved, I don’t know. In the short term, if you need serious GPU power, a physical workstation with a PCoIP host card will work fine.
The performance and wow factor of running a View VM on a tablet is high as well. If you want to impress anyone, just show this setup on a tablet. Two or three taps and you are in. But we all know that user experience (UX) designs for desktop applications were meant for a large screen, mouse, and keyboard. It will be interesting to see how the evolution of these technologies continues, so that UX can hit mobile devices in a more appropriate way. Variations of application virtualization are perhaps the next step. Again, another exciting unknown.
Also worth noting is the competition, not only in classically defined VDI solutions, but in access to systems generally. A compelling aspect of View is that it pairs a solution for remote display with brokered, secure remote access in one package. But competing solutions do not necessarily have to take that approach. Microsoft’s DirectAccess allows secure RDP sessions without a traditional VPN. I have not yet had an opportunity to try their Unified Access Gateway (UAG) solution, but it gets rave reviews from those who implement and use it. Remote Desktop Session Host (RDSH) in Windows Server 8 promises big things (if you only use Windows, of course).
Among the other challenges is how to implement such technologies cost-effectively. The up-front costs of going beyond a pilot phase might be a bit tough to swallow, as technical challenges such as storage I/O deserve attention. I suspect the new wave of SSD and SSD-hybrid SAN arrays might make the technical and financial challenges more palatable. I wish I had the opportunity to demonstrate how well these systems would run on an SSD or hybrid array, but the word “pilot” typically means “keep the costs down.” So no SSD array until we move forward with a larger deployment.
There seems to be a rush by many to take a position on whether VDI is the wave of the future, or a bust that will never happen. I don’t think it’s necessary to think that way. It is what it is: a technology that might benefit you or the business you work for, or it might not. What I do know is that it is rewarding and fun to plan and deploy innovative solutions that help end users, while addressing classic challenges within IT. This was one of those projects.
Those who have done these types of implementations will tell you that successful VDI deployments always pay acute attention to the infrastructure, especially storage. (Reading about failed implementations seems to confirm this.) I believe it. I was almost happy that my licensing forced me to keep this deployment small, as I could focus on the product rather than the storage I/O implications that would inevitably come up with a larger deployment. Economies of scale make VDI intriguing to deploy and operate. However, it appears that scaling is the tricky part.
What might also need a timely update is Windows licensing. There is usually little time left in the day to understand the nuances of EULAs in general – especially Windows licensing. VDI adds an extra twist to this. A few links at the end of this post will help explain why.
None of these items discounts the power and potential of VDI. While my deployments were very small, I did get a taste of its ability to consolidate corporate assets back into the data center. The idea of provisioning, maintaining, and protecting end-user systems seems possible again, and in certain environments could be a profound improvement. It is easy to envision smaller branch offices greatly reducing, or even eliminating, servers at their locations. AD designs simplify. Assets simplify, as does access control – all while providing a more flexible work environment. Not a bad combination.
Thanks for reading.
Two links on Windows 7 SPLA and VDI
RDSH in Windows Server 2008
VDI has little to do with the Desktop
Scott Lowe’s interesting post on SR-IOV
Improving density of VMs per host with Teradici’s PCoIP Offload card for VMware View