From Mainframe to Virtual Desktop – Good to Better

As with most technology advances, the newest, hottest “thing” grew out of an earlier model.  The latest buzzword, Cloud Computing, leverages incredible advances in processors, disk storage, and networking to extend a model that has existed nearly from the dawn of the computer age.  So as IT leaders present the ROI and justification for moving to the latest technology, we should present the change as an inevitable progression from earlier computing models.

Forty years ago, Mainframes established the resource model of the first commercially viable computers.  These early machines were behemoths in physical size and cost, so an organization typically purchased just one.  Large companies jumped at the opportunity to reap the rewards of fast, accurate number crunching: a room-sized computer replaced armies of number-crunching people.  Interaction with The Computer happened at a “terminal” that could be located at some distance from the computer itself, and the newly minted System Administrators never had far to go to keep an eye on The Computer.

The dawn of the Personal Computer 30 years ago shook up that model.  At first, individuals purchased a PC and brought the device to the office.  Companies still bought large Mainframe computers and did not quite know what to make of the PCs that began popping up in various offices.  Users loved the PC because they could customize and control it.  As more and more data migrated to the five-and-a-quarter-inch floppy disks littering those offices, organizations realized that someone needed to manage all of these computers.  The next ten years saw rapid advancement in Local Area Network protocols as System Administrators struggled to manage a computing environment rather than The Computer.

Companies still relied on Mainframe computers in the 1980s, and many continue to use them to this day.  For large, database-intensive workloads (billing tens or hundreds of thousands of customers each month), Mainframes remain the way to go.  For some users even today, the PC is a platform for doing a little email and a lot of accessing a Mainframe with a client program.

While PCs dominated the organizational computing landscape through the 1990s and beyond, System Administrators continued to look longingly at the simpler strategy of managing one centralized computing resource.  As Local Area Network technology progressed, developers tried to put the genie back in the bottle.  Enter the Thin Client.  Users on the network got a device with very minimal hardware.  Once turned on, it tried to act like a PC, but all the programs actually ran on a large server in a central location.  This centralization of computing resources promised to be easier to manage and cheaper to run.  Users hated it.  They lost control of their computing environment.  They could no longer add programs they found at the bookstore or bring their own desktop printer to the office.  Despite the advantages for System Administrators, the Thin Client model did not gain wide acceptance.

In some situations, though, Thin Clients provide a great alternative to costly PC upgrades.  A few years ago, I deployed and ran this model for a small private school using the open-source Linux Terminal Server Project (LTSP).  I converted the existing PCs to Thin Clients and spec’ed out a beefed-up server.  PCs that had been maddeningly slow suddenly ran faster and more reliably than before.  As the school was able to replace the computer lab PCs, I converted them to a hybrid model using Active Directory and NFS file sharing: users’ personalized settings still followed them from computer to computer, but most processing now happened at the PC instead of at the server.
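For the curious, the file-sharing piece of that hybrid setup is the simple part.  Here is a minimal sketch, assuming home directories live on a server named fileserver and the lab sits on a 192.168.1.0/24 network (the name and addresses are made up for illustration):

    # /etc/exports on the server: share /home with the lab network
    /home    192.168.1.0/24(rw,sync,no_subtree_check)

    # /etc/fstab entry on each lab PC: mount the shared home directories at boot
    fileserver:/home    /home    nfs    defaults    0 0

With home directories mounted this way, a user’s files and settings show up on whichever lab PC they log into, while the PC’s own CPU and memory do the heavy lifting.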

In the past year or so, a new model has provided a plausible path for the return of the Mainframe, 21st-century style.  Virtual Desktop Infrastructure, or VDI, provides a hardware model similar to traditional Thin Clients, but allows for far more user control.  Where the Thin Client model typically operated as one shared computer platform, VDI operates as many virtual computers.  Without diving into a dense technical discussion, suppose you had an office with five Thin Clients.  If one of those users opened a program that went rogue, eating up memory and CPU, all five users would suffer.  Because it was a truly shared environment, Administrators had to put serious restrictions on what any individual could do.

VDI, on the other hand, creates multiple virtual PCs on the server.  If a user runs a program that crashes and dies, the other users on the server do not notice.  Each virtual PC is “sandboxed” so that crashing it does not consume resources devoted to other users.  Because all this virtualization happens in one location, administration becomes easier by leaps and bounds.  But perhaps the true genius in VDI lies in the foresight vendors like VMware and Citrix have put into mobile computing.  Not only can users log into any “PC” (think Thin Client hardware here) and have all their programs, files, and desktop backgrounds wherever they happen to be, but they can also log in to their “PC” remotely from a home PC and have it act exactly as it would in the office.  Goodbye to emailing sensitive files, burning data to CDs, thumb drives, and lost-laptop nightmares.  Amazingly, users can even log in from their iPads – yes, there is an App for that.
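To make that resource isolation concrete, here is a minimal sketch of a per-desktop resource cap, written in the open-source libvirt format purely for illustration (VMware and Citrix express the same idea with their own tools; the machine name and sizes below are hypothetical):

    <!-- One user's virtual PC: a runaway program here cannot grab more
         than this VM's allotted memory and CPUs -->
    <domain type='kvm'>
      <name>alice-desktop</name>
      <memory unit='GiB'>4</memory>
      <vcpu>2</vcpu>
      <!-- disks, network, and display devices omitted for brevity -->
    </domain>

Each virtual desktop gets its own ceiling, so one user’s misbehaving program stays that user’s problem.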

So how do we explain this to the other C-level decision makers?  It’s the best of both worlds: the security and cost savings of a Mainframe with the user customization of the PC.