Thursday, April 16, 2009

From Firmware to Nowhere...

When I first started with computers, I was working on Microdata systems running an O/S called Reality that supported 16 users on 64K of core memory. This worked because the system used a virtual memory model and a virtual machine model, but also because most of the operating system was burned into a memory chip, called a firmware chip. Its contents were burned in at the factory, and there was no way to change them once the chip was produced. Upgrading the firmware meant that a technician had to show up, turn off the computer, open its refrigerator-sized case, and replace the physical chip. I think they actually had to use a soldering gun to do this.

Not too far into my experience, there was a big change. They came out with a new technology that the technicians labeled "Mushware". It was really the same thing, but they could update the contents of the chip in place, without having to create a new chip. This was similar to flash ROM, probably a precursor to it or a variant of it. You might think of this as having a virtual O/S. That's how it felt at the time.

Over time, more and more of the operating system moved out of these specialized chips and into regular RAM, where it had to be bootstrapped at startup.

When IBM PCs came out with MS DOS, computer systems for a time seemed to move away from virtual memory and virtual code, but with the advent of Windows NT and Java, virtualization started a comeback. More recently, products like VMware, Xen, and Microsoft Virtual PC/Virtual Server have provided further options for virtualization. And now we have Cloud Computing.

Some of the trends that have recently been taking IT by storm involve taking a machine that used to require a physical host and moving it into a virtual environment, where you can move it around, clone it, save a snapshot, and do lots of other powerful things. Of course, it also became possible to wind up with so many systems that managing them, finding them, or even knowing they existed became next to impossible. There are always trade-offs, but in general, virtualization has been a good thing.
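To make those lifecycle operations concrete, here is a minimal, hypothetical sketch in Python. The VirtualMachine class and its methods are invented purely for illustration; they are not any particular vendor's API, just a model of what "snapshot, clone, and move around" means in practice.

```python
import copy
from dataclasses import dataclass, field

# Hypothetical model of a virtual machine and its lifecycle operations.
@dataclass
class VirtualMachine:
    name: str
    host: str
    disk_image: dict = field(default_factory=dict)
    snapshots: dict = field(default_factory=dict)

    def snapshot(self, label: str) -> None:
        # Save a point-in-time copy of the disk state under a label.
        self.snapshots[label] = copy.deepcopy(self.disk_image)

    def clone(self, new_name: str) -> "VirtualMachine":
        # A clone is a full copy that can diverge from the original.
        return VirtualMachine(new_name, self.host, copy.deepcopy(self.disk_image))

    def migrate(self, new_host: str) -> None:
        # "Moving it around": the guest stays intact, only the host changes.
        self.host = new_host

# Usage: snapshot before an upgrade, clone a test copy, move to new hardware.
vm = VirtualMachine("payroll", host="physical-01", disk_image={"os": "installed"})
vm.snapshot("before-upgrade")
test_copy = vm.clone("payroll-test")
vm.migrate("physical-02")
```

None of this would have been possible when the machine was welded to one physical box; that is the shift the paragraph above describes.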

Along the same lines, we have virtualization of applications.

At first, an application had to live on an O/S, and that meant hardware. In fact, hardware was such a large part of the cost of an application that, in the early days, customers would often buy the hardware first and then find someone to write the application for it. I remember forward-thinking salespeople trying to convince customers to think about the application first, then work backward to the best system to host it on. At the time, this was novel thinking!

But now, with cloud computing, your application can live anywhere. In fact, it may be distributed across multiple systems in data centers around the globe. These systems probably implement virtual machines that provide a slice of your functionality, and they use multi-tenant applications that allow multiple customers to share a virtual machine instance safely and securely. You really don't know, and for the most part probably don't care, what hardware it resides on. Your focus is the application: its functionality, availability, performance, reach, and ultimately its value to you.
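The multi-tenant idea can be shown in a toy sketch: one application instance serves many customers, every piece of data is tagged with a tenant id, and every query is scoped to that id. The names below are invented for illustration, not drawn from any real cloud platform.

```python
# Toy illustration of multi-tenancy: all tenants share one instance,
# but each tenant can only ever see its own data.
class MultiTenantStore:
    def __init__(self):
        self._rows = []  # each row is (tenant_id, record)

    def insert(self, tenant_id: str, record: dict) -> None:
        self._rows.append((tenant_id, record))

    def query(self, tenant_id: str) -> list:
        # Scoping every read to the tenant id is what keeps customers
        # isolated even though they share the same process and storage.
        return [rec for tid, rec in self._rows if tid == tenant_id]

store = MultiTenantStore()
store.insert("acme", {"invoice": 101})
store.insert("globex", {"invoice": 7})
assert store.query("acme") == [{"invoice": 101}]
```

Real multi-tenant platforms do this at every layer (storage, compute, networking), but the principle is the same: shared infrastructure, isolated data.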

The other nice thing is that you don't have to provision a system, or possibly even a data center, in order to bring up an application. This can mean huge cost savings and can speed deployment dramatically.

Of course, your data and applications may also disappear if the vendor goes out of business, and if they do, you don't really have any recourse. There are real risks with a technology this new. One way to mitigate them is to stay with larger vendors.

As one example of how risks can impact adoption of new technology: Canadian government agencies cannot put private data about Canadian citizens into the cloud, because if that data winds up on computers in foreign jurisdictions, those foreign entities (notably the US) may seize it under laws that violate Canadian privacy laws. So, no cloud computing, for now, for Canadian government agencies.

So, there are lots of potential benefits to virtualization and cloud computing, but there are also risks. The benefits will belong to those willing to take thoughtful risks. I believe that many companies are unaware of the costs they could be saving. Others are not realizing the benefits they should be because they don't have proper control over their virtualization initiatives (or are exercising overly tight control in the wrong places).

So, what is your company doing with virtualization, cloud computing and SaaS? Your answer may range from "Nothing" or "Watching and Waiting" to "Trailblazing".
