Monday, May 11, 2009

Twitter Limitations

Before I do part 2 of my Twittering from U2 series, I thought I should explain some limitations of this technology.

First, you are limited to 140 characters per tweet. In practice this is not a big deal: you can usually indicate status effectively in far fewer characters, and longer strings tend to be slower to process anyway.

Next up, there are limits on how many tweets you can send per day and how many API requests you can make per hour. As near as I can tell, these are the current limits:

  • 1,000 total updates per day, for your account.
  • 1,000 total direct messages per day, for your account.
  • 100 API requests per hour, for your account.
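
If you are automating calls from a program, it's worth adding a simple client-side throttle so a runaway loop can't blow through these caps. Here's a minimal sketch in Java (my own illustration, not part of Twitter's API; the hourly cap is a parameter you would set from the limits above):

public class TweetThrottle
{
    private final long minIntervalMs;
    private long lastCallMs = 0;

    public TweetThrottle(int maxPerHour)
    {
        // Space calls evenly across the hour
        this.minIntervalMs = 3600000L / maxPerHour;
    }

    // Block until enough time has passed since the previous call
    public synchronized void acquire() throws InterruptedException
    {
        long wait = lastCallMs + minIntervalMs - System.currentTimeMillis();
        if (wait > 0)
            Thread.sleep(wait);
        lastCallMs = System.currentTimeMillis();
    }
}

A caller would construct new TweetThrottle(100) and call acquire() before each API request.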

Then there are follow limits. Here is Twitter's own commentary about it:

http://help.twitter.com/forums/10713/entries/14959

The entry is from last November, but I could not find a newer one.

You can request whitelist status, and these limits will disappear. The limits on follows are a bit more complex, though, and quite controversial. I found this link:

http://www.marrubiumwriting.com/?p=296

Interesting stuff. Note that whitelisting is not likely to be granted for an automated app. Twitter is a free social networking site, which means it can't afford to give away bandwidth to a ton of high-volume commercial apps. Low-volume status notification is probably OK, though; especially if you are personally monitoring it, it falls more or less within the confines of social networking use.

So, to summarize:

If you are thinking about using Twitter to automate communication between systems, you need a use case where timeliness and reliability are not critical (our next blog post will address some of that) and where the volume is low. And for now, you probably need to avoid the parts of the API that involve following other Twitter accounts, unless you can keep that volume very low as well.

Monday, May 4, 2009

Twittering from U2: How and Why? Part 1

This is the first of several posts, where I plan to indicate how and why one would access Twitter from a U2 (Universe or Unidata) system.

For those who don't know what Twitter is, I'll provide two links that will help you to understand it. Wikipedia has a good explanation here: http://en.wikipedia.org/wiki/Twitter And here is Twitter's own page explaining what they are: http://twitter.com/about#about

So, if you've read these links, or if you're already familiar with Twitter, you've realized that this is primarily a social networking application, but that people have found other uses for it.

So, for starters, why would you want to do this?

Several uses that might be of interest to a U2 programmer include:
  • Sending messages from a system and being able to monitor them elsewhere. A powerful and simple publish/subscribe model (with some limitations).
  • Keeping customers or prospects aware of promotions, events and offers.
  • Filtering Twitter Searches programmatically, to provide a short-list of interesting messages.

There's bound to be more, but that's a good starting point.

Next, I'm going to show how to get updates from, and put updates to, Twitter directly from a UniBASIC program on Universe on *nix. The same concept will work on Universe on Windows, Unidata on anything, and for that matter any MultiValue (PICK) platform that lets you run a command-line application and capture the results.

To get the maximum reusability, I've done much of the UniBASIC code as subroutines, effectively an API that you can call.

I've created a directory named "TWITTER" as a subdirectory of the directory where your account's VOC resides. In actual fact, I created a type 19 file called TWITTER (a DIR file in Unidata), which automatically created the TWITTER directory for me, but we won't need the U2 file pointer at this time. I use this directory as a scratch area, a place for the Java classes that I use. On non-U2 systems you might need a MultiValue file called TWITTER and a directory called TWITTER.
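
For reference, creating the file looks something like this (from memory, so check the syntax on your release). On Universe:

CREATE.FILE TWITTER 19

And on Unidata:

CREATE.FILE DIR TWITTER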

The Java components that I use are a command-line Java program called FWTwitterDirect.java and an open-source library that I found referenced on the Twitter developer pages, called jtwitter.jar. You can read more about this library at http://www.winterwell.com/software/jtwitter.php. Our Java file is listed below:



import java.util.*;
import winterwell.jtwitter.*;

/**
 * FusionWare Twitter Direct
 * Copyright (c) 2009 FusionWare Corporation
 * This code is released as open-source under the LGPL license.
 * This code comes with no warranty or support.
 * The LGPL license text can be reviewed here:
 * http://www.gnu.org/licenses/lgpl.html
 *
 * To see info about our Twitter Gateway that provides for guaranteed
 * delivery of automated tweets, filtering of incoming tweets, and more
 * See FusionWare Twitter Gateway (TRILL) at http://www.fusionware.net
 * Phone: 1-866-266-2326 or 604-777-4254 or email info@fusionware.net
 *
 * This class is a command-line utility to interface with Twitter from within
 * Legacy LOB systems.
 */
public class FWTwitterDirect
{
    final static char LOW_VM = (char)29;

    /**
     * @param args
     * Syntax comes in two forms:
     *
     * Get:
     *     java FWTwitterDirect userId password
     *
     * Put:
     *     java FWTwitterDirect userId password "Update text"
     *     (Note last parameter should have quotes if it contains spaces.)
     */
    public static void main(String[] args)
    {
        try
        {
            if (args.length == 2)
            {
                // Get: retrieve the friends timeline and print one tweet
                // per line, with fields separated by CHAR(29)
                Twitter twitter = new Twitter(args[0], args[1]);
                List statuses = twitter.getFriendsTimeline();
                Iterator it = statuses.iterator();
                while (it.hasNext())
                {
                    Twitter.Status status = (Twitter.Status)it.next();
                    System.out.println(
                        status.user.screenName + LOW_VM +
                        status.user.name + LOW_VM +
                        status.createdAt.toString() + LOW_VM +
                        status.getText());
                }
            }
            else if (args.length == 3)
            {
                // Put: post the third argument as a status update
                Twitter twitter = new Twitter(args[0], args[1]);
                twitter.updateStatus(args[2]);
            }
            else
            {
                // No good
                System.out.println("Invalid Syntax:\njava FWTwitterDirect userid password [\"text\"]");
                System.exit(2);
            }
        }
        catch (Exception e)
        {
            e.printStackTrace();
            System.exit(1);
        }
    }
}


The syntax for the command, when run from the directory where your VOC resides, and with no CLASSPATH environment variable set, is as follows:



java -classpath TWITTER:TWITTER/jtwitter.jar FWTwitterDirect userId password ["text"]



Note that if the text contains spaces you'll need quotes around it. If you omit the text, we retrieve the last 20 tweets for the user and the user's friends.

Now for the UniBASIC API code:



SUBROUTINE FWTWEET.DIRECT.API(DIRECTION, USERID, PASSWORD, TEXT)
*
* Author: Robert Houben
* Version: 1.0
*
* FusionWare Twitter Direct
* Copyright (c) 2009 FusionWare Corporation
* This code is released as open-source under the LGPL license.
* This code comes with no warranty or support.
* The LGPL license text can be reviewed here:
* http://www.gnu.org/licenses/lgpl.html
*
* To see info about our Twitter Gateway that provides for guaranteed
* delivery of automated tweets, filtering of incoming tweets, and more
* See FusionWare Twitter Gateway (TRILL) at http://www.fusionware.net
* Phone: 1-866-266-2326 or 604-777-4254 or email info@fusionware.net
*
EQU TRUE TO 1, FALSE TO 0
EQU AM TO CHAR(254), VM TO CHAR(253), SVM TO CHAR(252)
EQU LOW.VM TO CHAR(29)
EQU LF TO CHAR(10)
EQU CR TO CHAR(13)
*
* Build the shell command that runs the Java utility
CMD = 'sh -c "'
CMD = CMD : 'java -classpath TWITTER:TWITTER/jtwitter.jar '
CMD = CMD : 'FWTwitterDirect '
CMD = CMD : USERID : ' '
CMD = CMD : PASSWORD
IF DIRECTION EQ "PUT" THEN
   CMD = CMD : ' '
   CMD = CMD : '""' : TEXT : '""'
END
CMD = CMD : '"'
* Run the command, capture its output, and map it to a dynamic array
EXECUTE CMD CAPTURING TEXT
CONVERT LF TO AM IN TEXT
CONVERT CR TO "" IN TEXT
CONVERT LOW.VM TO VM IN TEXT
* Strip any trailing empty attributes
LOOP
WHILE TEXT[LEN(TEXT),1] EQ AM DO
   TEXT = TEXT[1,LEN(TEXT)-1]
REPEAT
*
RETURN
*
END


Note that DIRECTION is passed in as either "GET" or "PUT".

For PUT, you must provide the value of your tweet in the TEXT variable. Remember to keep it to 140 bytes or we will truncate.

For GET, TEXT will be overwritten with a dynamic array containing up to 20 attributes. Each attribute is a tweet from the user or one of their friends, in date/time order, newest first. Each attribute is divided into multivalues, laid out as follows:

  • Multivalue 1 is the Twitter user name of the user that sent the tweet.
  • Multivalue 2 is the Twitter user's display name.
  • Multivalue 3 is the date/time of the tweet.
  • Multivalue 4 is the text of the tweet.

So, here is an example program that uses the API. Note that while this program is interactive, you can call the API from a program running in a phantom.



*
* Author: Robert Houben
* Version: 1.0
*
* FusionWare Twitter Direct
* Copyright (c) 2009 FusionWare Corporation
* This code is released as open-source under the LGPL license.
* This code comes with no warranty or support.
* The LGPL license text can be reviewed here:
* http://www.gnu.org/licenses/lgpl.html
*
* To see info about our Twitter Gateway that provides for guaranteed
* delivery of automated tweets, filtering of incoming tweets, and more
* See FusionWare Twitter Gateway (TRILL) at http://www.fusionware.net
* Phone: 1-866-266-2326 or 604-777-4254 or email info@fusionware.net
*
EQU TRUE TO 1, FALSE TO 0
EQU AM TO CHAR(254)
*
PRINT "Enter user id":
INPUT USERID
IF USERID EQ "" THEN STOP
*
PRINT "Enter password":
ECHO OFF
INPUT PASSWORD
ECHO ON
IF PASSWORD EQ '' THEN STOP
PRINT
*
LOOP
   PRINT "Enter update text ('.' to retrieve)":
   INPUT TEXT
UNTIL TEXT EQ '' DO
   IF TEXT EQ '.' THEN
      DIRECTION = "GET"
   END ELSE
      DIRECTION = "PUT"
   END
   CALL FWTWEET.DIRECT.API(DIRECTION, USERID, PASSWORD, TEXT)
   IF DIRECTION EQ "GET" THEN
      ACNT = DCOUNT(TEXT,AM)
      FOR A = 1 TO ACNT
         LINE = TEXT<A>
         IF TRIM(LINE) NE "" THEN
            NAME = LINE<1,1>
            DISPLAYNAME = LINE<1,2>
            TIME = LINE<1,3>
            MSG = LINE<1,4>
            PRINT "NAME=":NAME
            PRINT "DISP=":DISPLAYNAME
            PRINT "TIME=":TIME
            PRINT "TEXT=":MSG
            PRINT
         END
      NEXT A
   END
REPEAT
STOP
*
END



So, here are the pros and the cons:

On the pro side, you don't need any additional infrastructure; you can do everything from UniBASIC. We could also extend the class file to provide other types of retrieval in addition to the friends-timeline GET.

On the con side, your Universe server has to have Internet access (possible security issues), any temporary failure on Twitter's part will result in an error and a lost communication, and when you pull back tweets, you have to do your own parsing.

In my next post, I'll remove those cons using the FusionWare Integration Server with our Twitter Gateway technology preview. The new product is named FusionWare TRILL™ (Twitter Reliable Intelligent Live Link).



Thursday, April 23, 2009

Open Source Web Store for System i

I've been working with a user on replacing their current web store with one that actually integrates with their System i POS system. Their old one printed off an order form; staff had to pick up the printer output, manually check inventory, pull the stock, complete the order, and then notify the customer. If anything went wrong, they had to notify the customer of that, too. It was all asynchronous, which is not what most customers expect from a web transaction.

This is not a new task for us; we've done it for other customers in the past. You typically need access to two things: data and logic. Raw data is typically not too hard to get at; logic is a bit trickier. Our customer was price conscious (who isn't, these days?), so we looked at ways to reduce costs for them.

One way we did this was to go with a lot of open source software.

osCommerce is an open source web store product that uses MySQL as its database. We've found a way to hook the calls to the DB and, where relevant, pull/push data from/to the POS system through PHP web service calls.

In some cases it's data that we process directly, such as inventory levels for product availability. But in other cases we provide a web services layer that calls directly into RPG programs, allowing for the POS to process order completion, including credit card validation and all that fun stuff.

We have an amazingly simple process for creating a web service that calls an RPG program, using the FusionWare Integration Server Designer.

The customer is installing osCommerce, on top of Apache and PHP, on top of Linux, on commodity hardware. They are using more commodity hardware to run the FusionWare Server, which keeps the load off the System i (we could run there, but most System i apps already run at pretty full load). The end cost is a fraction of what IBM quoted them for a WebSphere and Global Services-based solution.

We've been doing this kind of data/logic integration for web sites since about 1995, when we had two customers start using web servers (one used Netscape, one used IIS) to put up web stores. These were custom jobs: they came up quickly with minimal functionality, and more was added over time. With products like osCommerce, you can quickly and easily bring up a full-featured web store, saving yourself the effort and pain of gradually creating this presence and getting it right. Now, with FusionWare, you can bring the freedom of open-source osCommerce to your System i POS system.

For anyone who is going, we'll be at the COMMON trade show next week in Reno. Booth number 214.

Tuesday, April 21, 2009

A Little Birdy Told Me

I've just discovered another use for Twitter.

You tweet a question about something you're researching, and almost as fast as Google comes back with an answer to your query, you get an answer from someone in the know.

How does this work?

Well, as near as I understand it, when I put my question in, it got indexed by Twitter.

Now, either the person at the other end did a Twitter search, or perhaps there is some kind of software out there that will notify you if a tag term is used somewhere.

In any case, I'm blown away with how quickly the response comes back!

Thursday, April 16, 2009

To Tweet or Not to Tweet

I've been on Twitter for over a month, now, and I think I'm only now starting to "get" it. I've been lurking, looking, and seldom tweeting at first, but now I'm starting to get the idea of some of what I can do with it and what it can do for me.

Interestingly enough, this is another case of moving from somewhere to nowhere. Originally, all business communications happened by paper or fax, using company letterhead, dates and signatures. Then came email, and communication became a bit more "virtual", but still went out through a corporate email server, usually backed up by information on a corporate portal.

But now, savvy companies have figured out that Facebook, Twitter, YouTube and a host of other social networking sites can drive significant interest, and ultimately business, to them. The interesting thing is that your corporate communication is no longer controlled by a corporate server. While you can block a Twitter account from following you, and you can refuse to accept a Facebook invite, at the volumes one hopes to get from these sites there is simply no practical way to do so.

Setting up these sites and getting started using them can be done very quickly, and with minimal cost, yet with huge benefits, so many companies are jumping on the bandwagon. In some cases, they jump on it because their competitors are there and they have to, in order to survive. But then, there are risks involved in using these things.

Where the company originally controlled all communication, and who we did it with, we now find that our communications are controlled by a series of other sites, and that our customers consist of anyone who can find us by any means and chooses to subscribe to our updates. We've lost control of who we do business with, we've lost control of our marketing medium, but we've gained infinitely more customers as a result. No longer are our customers people that we approach. They are people who approach us!

Most IT people still don't get it. Kids are getting it. They don't really think about it, they just use it, and the common uses become obvious. Business people really hate to not have control, so this type of free-wheeling business-by-experimentation model really scares them, or simply pisses them off. Many won't embrace it - to their disadvantage, I may add. Those who do embrace it will find that it will open new opportunities to them. I can't really say what these are, nor can I predict which ones will work and which ones will turn out to be insecure or unsafe, but for those willing to experiment, the opportunities await.

I suspect that the push to use these technologies is as likely to come from, or be blocked by, management and IT people almost equally at first. Misinformation, concern over the risk of the unknown, a desire to micromanage everything to fine detail, and people being protective of their job security, are all factors that can inhibit the move to use social networking.

So, my questions are:
  • Are you willing to yield some control, in favor of reach and savings?
  • Are you willing to experiment with new marketing opportunities?
  • Are you stuck in the past, or ready to embrace the future?
As for me, I've begun to tweet, and you can't stop me now!

From Firmware to Nowhere...

When I first started with computers, I was working on Microdata systems running an O/S called Reality that supported 16 users on 64K of core memory. This worked because the system used a virtual memory model and a virtual machine model, but also because most of the operating system was burned into a memory chip, called a firmware chip. This chip had its contents burned in at the factory, and there was no way to change them once it was produced. Upgrading firmware meant that a technician had to show up, turn off the computer, open its refrigerator-sized case, and replace the physical chip. I think they actually had to use a soldering gun to do this.

Not too far into my experience, there was a big change. They came out with a new technology that the technicians labeled "Mushware". It was really the same thing, but they could update the contents of the chip without having to create a new chip. This was similar to flash ROM, probably the precursor to it, or a variant of it. You might think of this as having a virtual O/S. That's how it felt at the time.

Over time, more and more of the operating system moved out of these specialized chips and became part of regular RAM, that had to be bootstrapped.

When IBM PCs came out with MS-DOS, computer systems for a time seemed to move away from virtual memory and virtual code, but with the advent of Windows NT and Java, virtualization started a comeback. More recently, products like VMware, Xen and Microsoft Virtual PC/Virtual Server have provided further options for virtualization. And now we have Cloud Computing.

We can see that some of the trends that have recently been taking IT by storm involve taking a machine that used to require a physical host, and moving it into a virtual environment, where you could actually move it around, clone it, save a snapshot, and do lots of powerful things. Of course, it became possible to wind up with so many systems that managing them, finding them, or even knowing they existed became next to impossible. There are always trade-offs, but in general, virtualization has been a good thing.

Along the same lines, we have virtualization of applications.

At first, an application had to live on an O/S, and that meant hardware. In fact, the cost of hardware was such a big component of an application that, in the early days, customers would often buy hardware first, then find someone to write the application for it. I remember forward-thinking salespeople trying to convince customers to think about the application first, then work backward to the best system to host it on. At the time, this was novel thinking!

But now, with cloud computing, your application can live anywhere. In fact, it may be distributed across multiple systems in data centers around the globe. These systems probably implement virtual machines that provide a slice of your functionality, and they use multi-tenant applications that allow multiple customers to share a virtual machine instance safely and securely. You really don't know, and probably don't care for the most part, what hardware this resides on. Your focus is the application: Its functionality, availability, performance, reach, and ultimately its value to you.

The other nice thing is that you don't have to provision a system, or possibly even a data center, in order to bring up an application. This yields huge cost savings and can speed deployment dramatically.

Of course, your data and applications may also disappear if the vendor goes out of business, and if they do, you don't really have any recourse. There are real risks with a new technology like this. One way to mitigate these risks is to stay with larger vendors.

As one example of how risks can impact adoption of new technology: Canadian government agencies cannot put private data about Canadian citizens into the cloud, because if the data winds up on computers in foreign jurisdictions, foreign entities (notably the US) may seize it under laws that violate Canadian privacy laws. So, no cloud computing, for now, for Canadian government agencies.

So, there are lots of potential benefits to virtualization and cloud computing, but there are also risks. The benefits will belong to those willing to take thoughtful risks. I believe that many companies are unaware of the costs they could be saving. Others are not realizing the benefits they should because they don't have proper control over their virtualization initiatives (or are exercising overly tight control in the wrong places).

So, what is your company doing with virtualization, cloud computing and SaaS? Your answer may range from "Nothing" or "Watching and Waiting" to "Trailblazing".

Monday, April 6, 2009

IBM Optical Data Conversion (EBCDIC)

Here at FusionWare we love a good challenge involving disparate systems, data and business logic. We've been doing this for so long that very few things can stump us.

Recently we had a customer with a large amount of data on an AS/400 on optical drives. They had an application on the AS/400 that would let them read and process this data so that they could view historical information. They had migrated their application to another system, but compliance regulations (and collections) required them to retain access to their historical data.

Unfortunately, this meant that they kept paying maintenance on the AS/400, and worse, they were facing a situation where their hardware was old enough that it was going to drop off IBM maintenance, so they were facing a hardware upgrade.

The customer visited trade shows like COMMON and contacted all sorts of companies, but everyone they spoke to said "No, we can't migrate this data off - you're stuck with the AS/400". Their own, incredibly creative efforts were gradually getting them there, but they simply didn't have the time to do all the conversions themselves, and really needed tooling to make it efficient.

Finally, the customer found us. We discussed what they were trying to do, and we provided a proposal and estimate to do the following:

  • Conversion of their historical data to a SQL Database.
  • A web-based application to access this data, providing at least the same functionality as the current AS/400-based lookup program.
  • Good performance when accessing the database.

In consultation with the customer, we decided to use SQL Server for the converted data (any SQL database that could handle the volume would have done) and IIS with ASP.NET to rapidly create the web GUI for accessing the new data store. Again, other web options could have been used; we work with our customers to find the solution that gives the best results and meets their corporate standards.

In doing the work, there were a number of interesting challenges that we had to work through:

  • The optical data is in EBCDIC format. We needed special tools to provide the conversion from EBCDIC to ASCII, including handling Packed Decimal and other special formats (see the sketch after this list).
  • The optical data was huge: over 60 GB of raw data. We needed a target database that could handle the volume, and indexing was critical to ensure reasonable performance for the resulting application.
  • The optical files consisted of three parts: header, metadata and data. Over time, the format of the data written changed, so that there were six variations of metadata for one file type.
  • Occasionally, garbage files were written to the optical drive and/or garbage data was written to some of the files. The only way to know this was to process the file and detect the problem when processing the converted data.
  • Because the data set was so large, it turned out that attempts to anticipate data problems by sampling data were largely unsuccessful. You really had to go for it and deal with anomalies as you encountered them. A good restart approach was critical.
  • Some critical data was embedded in formats that required complex handling: basically, free-format text fields whose placement changed over time. A fairly complex algorithm had to be devised to extract this data reliably.
  • There were several collections of data. We started with one of the better-defined, but larger, sets. One objective was to come up with reusable components and code that would make subsequent collections easier to work with.
  • Security and privacy. The customer's data included data with privacy concerns, so we transferred it between the customer's office and ours using Maxtor Black Armor secure USB drives (http://www.maxtor.com/en/hard-drive-backup/external-drives/maxtor-blackarmor.html). We did our development and testing work locally with all the data (including the SQL Server database) on our own Black Armor drives, ensuring maximum security and protection of the customer's data.
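
To give a flavor of what the EBCDIC and Packed Decimal handling involves, here is a minimal sketch in Java. This is an illustration only, not the tooling we actually used (that's described below), and the sample field and scale are hypothetical:

import java.math.BigDecimal;

public class EbcdicSketch
{
    // Decode an IBM packed-decimal (COMP-3) field. Each byte holds two
    // BCD digits, except the last, whose low nibble is the sign
    // (0xD = negative; 0xC and 0xF are treated as positive here).
    static BigDecimal unpackDecimal(byte[] field, int scale)
    {
        StringBuilder digits = new StringBuilder();
        for (int i = 0; i < field.length; i++)
        {
            digits.append((field[i] & 0xF0) >> 4);
            if (i < field.length - 1)
                digits.append(field[i] & 0x0F);
            else if ((field[i] & 0x0F) == 0x0D)
                digits.insert(0, '-');
        }
        return new BigDecimal(digits.toString()).movePointLeft(scale);
    }

    // Convert an EBCDIC text field to a Java string using code page 037
    static String ebcdicToString(byte[] field) throws Exception
    {
        return new String(field, "Cp037");
    }

    public static void main(String[] args)
    {
        // 0x12 0x34 0x5D is -123.45 when the field has two decimal places
        System.out.println(unpackDecimal(new byte[] { 0x12, 0x34, 0x5D }, 2));
    }
}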

The solution involved a number of tools and steps:

First, we used a product called VEDIT from Greenview Data, Inc. (http://www.vedit.com/), including their Level 2 EBCDIC conversion tools, to facilitate the conversion. This product lets you inspect the data and view it in both ASCII-converted and hex mode on the same screen (split window). It also supports a macro mode so you can automate operations from a command line. VEDIT uses something called a layout file to do its EBCDIC conversion.

We used the FusionWare Integration Server (our own product) to orchestrate the steps, transfer the resulting ASCII-delimited files, run SQL DML scripts, and create layout files and SQL DDL scripts.

Because formats changed over time, we had to do the conversion in several steps:

The first step was a preprocess phase. We started by breaking the EBCDIC files up into header, metadata and data portions. Then we processed the metadata, and used XSLT to create layout and SQL DDL scripts. We had to associate each converted file with the appropriate layout files. When this pass was done, we had numerous variations of both the SQL DDL scripts and the layout files.
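
As a rough idea of the XSLT step, the driver can be as small as this sketch (the stylesheet and file names here are hypothetical stand-ins for ours):

import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

public class MetadataToDdl
{
    public static void main(String[] args) throws Exception
    {
        // Apply a stylesheet that turns one metadata variation into a
        // CREATE TABLE script (a similar one emits the VEDIT layout file)
        Transformer t = TransformerFactory.newInstance()
            .newTransformer(new StreamSource("metadata-to-ddl.xsl"));
        t.transform(new StreamSource("statements-metadata.xml"),
                    new StreamResult("statements-variation1.sql"));
    }
}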

We used these initial steps to create the SQL Server tables and to build the application for viewing the data. This was an ASP.NET application, accessed through a browser, using Windows authentication and role-based access.

Then we started the conversion process itself. The conversion had to detect which variation of the layout file to use to create the ASCII comma-delimited data. Because of some data issues, we also had to add a data-cleansing step for some of the files. Then we transferred the data into the SQL Server tables.

Actually, the above three steps were iterative. As we got to the data phase, we discovered data issues that required us to restart the final phase, and in some cases redo parts of the preprocessing phase as well. Some of this required changes to the SQL tables, as we discovered that at one point the application had added new fields to the database.

The application that we built for accessing their data provided greater searchability than their original AS/400 application, and performance was not only better than accessing the AS/400 optical data, it was actually faster than accessing historical data on their new system.

The end result of this process was a set of reusable code components that can be applied to the additional collections.

The customer now has their initial collection (statements) sitting in a SQL Server database (about half a billion rows worth, taking up about 130 GB of data, index and tempdb space) and a process and components that they can repurpose to convert their other collections of historical data. Once we complete the conversion of their other collections, they will be able to decommission the old AS/400 system with its optical drives, while still meeting their legal obligations.

Monday, February 2, 2009

Revelation G? You have what???

FusionWare Corporation has products and people with a long history of supporting connectivity for all sorts of unusual platforms and databases. In addition to other solutions, we have a suite of middleware products for legacy, non-relational database systems that are referenced as "MultiValue". We have ODBC, JDBC, OLE DB, ADO.NET and other types of drivers for these systems. We let you connect these systems to just about anything.

Furthermore, these systems come in many different flavors, which run on many different hardware/OS combinations. We can support just about any flavor of these databases. They are a lot like the old Business Basic systems, where it seems lots of vendors licensed (or stole) their own version of the software. Most of these systems have modern, state-of-the-art variations, with rich connectivity options. Occasionally, though, we run into a dinosaur.

Recently, we had a call from a company. They were switching small doctors' offices from a competitor's old DOS-based package to their new Microsoft-based package, which used SQL Server or MS Access and WinForms to provide an application for these clinics and doctors. The old application did not include source code, and the original vendor appeared to have gone out of business (or was uncooperative with the customer trying to move away from them). The real problem was that the version of the database they were using was over 20 years old!

The database was a MultiValue variation called Revelation. The DB vendor got up to version G.2 before making their first major, compatibility-breaking change to something they called "Advanced Revelation", sometimes called ARev. They've moved on to other names since then. A lot of users had apps that worked fine with Revelation version G, but needed a rewrite to work with ARev or later. The biggest change was that Rev G assumed a green-screen interface based on the DOS command console, while the newer versions assumed Windows as the GUI.

Now, most versions of MultiValue databases were designed to run on mini-computers (remember those?) and as such were multi-user systems. Revelation was designed to run on DOS-based PCs. They could use a shared Novell-based file system, but the selling point was low-cost personal computers. They also tended to be really easy to run and maintain on a single-user PC, which is how most small clinics and doctors' offices used this particular app. Because of the low cost, they lent themselves to inexpensive applications. There just wasn't the money in upgrades. There also weren't any particular reasons to upgrade these users.

In short, it's not that unusual for someone to have an old app that they've had for a long time, that still runs on this system. What is unusual is for a user or vendor who wants to move them off to figure out that it's a MultiValue system they are working with. Even more rare is for them to actually find someone who knows what it is, let alone can work with it!

Anyway, we got this call. A vendor who wanted to move a customer off this system wanted to know how to get the data out. Could we help them?

Well, this variation is different enough from the other, multi-user versions, and rare enough, that we weren't going to create a full-blown ODBC or OLE DB driver for it. But we did come up with a very cost-effective way for a user to create an ASCII delimited file of their data.

I actually was able to find an old copy of the 5 1/4" floppy-based Revelation Developer system. (I have both a developer and run-time license of Revelation G.2 - boy am I a dinosaur!)

The upshot was a Revelation-BASIC program, along with instructions for importing it into your application and running it. The program read a local O/S file and used it to determine which data to read in and process.

We had to work through some interesting limitations. It turns out that although the application can access the Windows file system, it is limited to about 32K for a single block of data. So we had to create blocks of rows and concatenate them later into a single delimited file.
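
If you're curious what the concatenation step amounts to, here is a minimal sketch in Java (the block-file naming convention is hypothetical; any scripting tool would do just as well):

import java.io.*;

public class ConcatBlocks
{
    public static void main(String[] args) throws IOException
    {
        // Merge the sub-32K block files written by the Revelation-BASIC
        // export into one delimited file
        PrintWriter out = new PrintWriter(new FileWriter("EXPORT.CSV"));
        for (int i = 1; new File(blockName(i)).exists(); i++)
        {
            BufferedReader in = new BufferedReader(new FileReader(blockName(i)));
            String line;
            while ((line = in.readLine()) != null)
                out.println(line);
            in.close();
        }
        out.close();
    }

    // Hypothetical naming convention for the block files
    static String blockName(int i)
    {
        return "BLOCK" + String.format("%03d", i) + ".TXT";
    }
}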

All this was done in a clean, reusable package. This vendor can now reuse the package to convert data from any of their competitors' systems to their application. The same package could be used to copy data to SQL Server, Microsoft Excel, or any number of other destinations.

Another satisfied customer. Another interesting challenge met!

Friday, January 30, 2009

The Virtual Handbrake

This story is dedicated to all the developers who have found themselves in a situation where they have set something in motion that they desperately wish they could stop...

When I was a kid, I used to think that computers were good at arithmetic and logic. It turns out that computers are horrible at numbers. They can't round, don't understand that 1.9999999999999 is effectively equal to 2, can't handle floating point without a human holding their hand. And LOGIC? Not on your life. They are exceptional at doing what you tell them - and not doing anything you forget to tell them. They seem to have a secret, perverse delight in doing the most illogical thing in response to your instructions. If you gave half the detail to a human they'd feel that you were patronizing them, but a computer will get it wrong every time.

The one thing they have working for them is that if you can give them instructions that are clear and complete enough, they will do the most mundane things at absolutely inhuman speed. Things that would cause a human to die of boredom just thinking about them, they seem to delight in eating up and spitting out in almost no time at all.

Every once in a while, their greatest weakness combines with their greatest strength, and then things go really bad! There is nothing like the feeling you get when you've typed in the name of your latest program to fix the database and, after a moment of trepidation, hit the Enter key - only to see a stream of error messages scrolling up the screen.

You frantically pound the Break key, but nothing happens; finally, you turn the terminal's power off and on in hopes of resetting your session and stopping the nearly-as-fast-as-light carnage! It's at a moment like that that you really wish your computer had a virtual handbrake. Something that you could run to and pull (or push) and make it all stop!

I knew someone who used to tell a story about an airline reservation software system that they had. A pilot was working on a terminal, using a word processor program. With this program, you would hit a function key to save your document, and a prompt would come up saying "OK to save (Y/N)?" and you would hit the letter 'Y'. Your document would save and the message "Saved" would show on the status bar. If you happened to hit the wrong function key, it would prompt you with "OK to delete (Y/N)?" and if you happened to hit the letter 'Y', your document vanished without a trace, the status bar said "Deleted", and you got to repeat the last hour's work.

Well, one day a pilot managed to hit the wrong key, and the very moment he hit the letter 'Y', he realized what he had done. Airline pilots have great reflexes, and this guy was no exception. Without missing a beat, his hands flew from the keyboard to the cable on the back of his terminal that connected it to the server. He yanked the cable off. As he did, our man of might rose from his seat, attracting the attention of the entire office as he sprinted madly to the computer room. Once there, he dove to the back of the computer to catch the other end of his cable and yank it off the server before his instructions could reach it.

Somehow the fact that the word "Deleted" was on his status bar didn't tip him off to the fact that he hadn't been fast enough. I give him an 'A' for effort, though!

Oh! For a virtual Handbrake!

The Computer Mouse and the Specimen Agitator

This is an example of how a customer thought outside the box to solve a problem.

We sell an ODBC driver for legacy MultiValue databases, and around the mid-1990s we had a customer who was doing HIV testing in the Seattle area. (At that time these products were sold by a different company, Liberty Integration Software; my current company, FusionWare, sells them now.)

The customer was using MS Query (which came with Excel) to download some information into a spreadsheet. The report ran through a huge database file on their MultiValue system that took a long time to query (several hours). But for some reason, the report wouldn't complete. The customer cancelled it and restarted it several times, and then they noticed something strange.

MS Query used to display the ODBC globe, which looked like a globe of the world, in the top right corner, and much as IE animates its stylized "e", the world would turn while MS Query was downloading data.

Well, the report would run for a few minutes, then the world would stop turning. The customer noticed that if he touched any keys on the keyboard or moved the mouse, the world would turn again for a few minutes. Somehow, MS Query was getting stuck until something hit the applet's message queue.

It turned out that this was a known bug with certain combinations of MS Query and associated ODBC and Jet components.

What to do now? The customer was a busy man and didn't have the time to sit there moving the mouse while the report finished - this would take hours!

Then he had a brainstorm! He took a specimen agitator. This is a bit like a miniature version of the machines that shake up paint cans at your local hardware store, except that it's intended to shake up blood samples, or other samples, possibly mixed with other chemicals, in a test tube.

Well, he put the mouse in the specimen agitator, turned it on, and left it running until the report completed.

The customer thought he should send Bill Gates one of these specimen agitators.

I never cease to be amazed at the ingenuity of some of my customers!