Old guard, new beat: TP monitors avert gridlock by policing client/server traffic
It’s an IT manager’s biggest nightmare. On a dark and stormy night, a WAN link suddenly crashed, taking down the national client/server reservation system for a company that wishes to remain anonymous. Harried travel agents kept pounding transactions into the system, expecting it to come back up. When it did, the system was immediately slammed by a barrage of pent-up traffic. Designed to handle hundreds of transactions per second, the system choked on thousands, and it crashed once again.
This company’s harrowing experience illustrates a dirty little secret about client/server applications. When they’re spun across the enterprise, many can’t handle the high-transaction traffic or unpredictability of complex, production-level environments. Increasingly, companies like the one above are turning to a tried-and-true mainframe approach to keep their new client/server systems afloat: transaction-processing monitors.
Like an old-time traffic cop, a TP monitor ushers client requests to data services in a secure, reliable, and high-performance manner. If the reservation system had been outfitted with one, the gridlock never would have happened. These monitors “give you mechanisms so you’re not at the mercy of the [client/server] environment, you’re more in control of it,” says Rich Finkelstein, president of Performance Computing Inc., a consultancy in Chicago.
Although they’ve been in mainframe systems for decades, TP monitors are just beginning to show up on the client/server radar screen. Until recently, most client/server applications have been too small to benefit from TP monitors, or users didn’t recognize their value as an alternative to buying immature middleware or building custom systems piecemeal.
TP monitors such as IBM CICS (Customer Information Control System) Open and Novell Inc.’s Tuxedo have the bandwidth to handle load balancing, security, dynamic routing, and access to multiple databases, among other functions. Many IS shops believe that running a high-volume client/server application without a TP monitor is akin to crossing a highway blindfolded.
For applications that handle millions of transactions per day, hundreds of users, and multiple, interconnected servers, “you’d be crazy not to use one of the monitors,” says Michael Prince, director of IS at Burlington Coat Factory Warehouse Inc., in Burlington, N.J.
Prince should know. Three years ago, he set out to move the clothing company’s inventory system from an aging mainframe to a client/server environment. At the time, it was doubtful whether a single relational database could handle the 150G bytes’ worth of data needed to track more than a million items carried in 200 stores.
To balance the load and assure a good response time, he and his crew split the application into 17 Oracle Corp. databases on four servers from Sequent Computer Systems Inc. Each database corresponded to a Burlington merchandise division, such as ladies’ coats or men’s outerwear.
Despite the sound design, the system jammed whenever a single Oracle database fizzled.
In order to keep things moving, Prince installed a Novell Tuxedo TP monitor running on Unix. The monitor splits a single sales transaction (for example, the purchase of a suitcase and a bathrobe) and sends separate messages to the respective back-end databases. If one database is down, Tuxedo queues the message and delivers it once the server comes back up, or reroutes the message to an available server. Meanwhile, the other database gets updated without delay.
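The split-route-queue pattern Prince describes can be sketched in a few lines of Python. This is a toy illustration of the idea, not Tuxedo’s actual API; the class names, the message format, and the in-memory queues are all hypothetical:

```python
from collections import deque

class FakeDB:
    """Stand-in for one back-end merchandise-division database."""
    def __init__(self, up=True):
        self.up = up
        self.applied = []          # messages this database has committed

    def is_up(self):
        return self.up

    def apply(self, msg):
        self.applied.append(msg)

class MiniRouter:
    """Toy sketch of a TP monitor's split-and-route step.

    Each division (ladies' coats, men's outerwear, and so on) has its
    own database.  If one is down, its messages are queued and replayed
    when it recovers, so the other databases are updated without delay.
    """
    def __init__(self, databases):
        self.databases = databases                        # name -> FakeDB
        self.pending = {name: deque() for name in databases}

    def submit(self, sale_items):
        # Split one sales transaction into one message per database.
        for item in sale_items:
            division = item["division"]
            msg = {"sku": item["sku"], "qty": item["qty"]}
            db = self.databases[division]
            if db.is_up():
                db.apply(msg)
            else:
                self.pending[division].append(msg)        # store and forward

    def replay(self, division):
        # Called when a downed database comes back on-line.
        db = self.databases[division]
        queue = self.pending[division]
        while queue and db.is_up():
            db.apply(queue.popleft())                     # deliver once, in order
```

A real monitor adds durable queues and transactional guarantees on top of this skeleton; the point of the sketch is only the routing logic, in which one downed database never blocks updates to the others.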
“So [the system] is resilient and has high availability,” Prince says.
Another popular way to use TP monitors in C/S environments is to glue together “three-tiered” architectures in which a mainframe doles out data, a server runs application services, and the client makes things easy for the end user. Seventeen percent of mission-critical client/server applications are three-tiered, and it’s a growing trend, according to Standish Group International Inc., a market-research firm in Dennis, Mass.
Unum Life Insurance Co. of America is one company moving in this direction. The Portland, Maine, firm built a document-management system that employs IBM CICS for OS/2 to bridge a 3090 mainframe, a client/server system, and mainframe dumb terminals. People sitting at the dumb terminals can generate requests through the mainframe to the C/S system, and CICS routes the transactions between the tiers and lets terminal users access LAN resources that were otherwise unavailable, according to Bill Cook, a senior programmer analyst at Unum.
“What it does is extend the ability for our users to take advantage of client/server applications through their hardware infrastructure,” Cook says.
If not TP, what?
All this complexity has scared some IT folks away from using TP monitors when an application involves a single database, simple transactions, or few clients.
“A TP monitor is going to add complexity. If you can run with the capabilities of the database and the hardware, and satisfy your volume requirements, that’s what you ought to do first,” says Daniel Amedro, vice president of MIS for Regency Systems Solutions, Hyatt Corp.’s technology division, in Oak Brook Terrace, Ill.
For example, Hyatt is upgrading its client/server reservation system to take advantage of Informix Software Inc.’s multithreaded database engines, which can meet the volume demands of roughly 1,000 booking agents without using a TP monitor, Amedro says.
Others favor TP-monitor alternatives such as RPCs (remote procedure calls) and stored database procedures. However, RPCs are not well-suited for load balancing and application-partitioning tasks, and stored procedures are limited to the database for which they were designed.
Mark Marcus, manager of advanced applications at Holiday Inn Worldwide headquarters in Atlanta, learned about the shortcomings of RPCs when he tapped them to handle decision-support system transactions between SCO Unix and UnixWare-based clients at hotel sites and Solaris-based servers in Atlanta. “That was real low-level, Unix-type coding. Now we have a TP monitor [Tuxedo] doing it out of the box, so we have a lot [fewer] proprietary processes,” Marcus says.
Likewise, stored procedures, which are built as database-application modules, offer little in the way of load balancing and reliability. They can’t access resources outside the database, so users end up handling those tasks on the client side, which can ultimately bog down performance, Finkelstein says.
Until client/server systems can mimic the mainframe’s transaction-processing predictability and reliability, most agree that TP monitoring can help. “It’s a concept that’s proven itself to be invaluable for decades. Trying to do transaction processing without a transaction monitor in any type of volume probably doesn’t make sense,” Prince says.
On the Transaction Processing Beat
A TP monitor helps large-scale C/S applications run more smoothly by directing the flow of transaction traffic from the client to the server in a reliable, secure, and high-performance manner. Among its jobs:
Data integrity: to guarantee that updates happen on all relevant servers or no servers in the event one server goes down. The monitor can also queue a transaction until the downed server is back on-line, assuring that the transaction occurs only once.
Security: to ensure that any given user or transaction is granted access only to specific resources.
Application partitioning: to minimize data traffic between the client and server by allowing users to partition applications to split the processing work between both platforms.
Load balancing: to distribute a transaction load across multiple servers to achieve higher throughput.
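The load-balancing job in the list above can be illustrated with a minimal Python sketch. The strategy shown (route each new transaction to the server with the fewest in-flight requests) is one common approach, and the class and method names are invented for illustration, not taken from any particular monitor:

```python
class Server:
    """A back-end application server, tracked by its current load."""
    def __init__(self, name):
        self.name = name
        self.inflight = 0            # transactions currently being processed

class LeastLoadedBalancer:
    """Route each incoming transaction to the least-busy server."""
    def __init__(self, servers):
        self.servers = servers

    def dispatch(self):
        # Pick the server with the fewest in-flight transactions.
        target = min(self.servers, key=lambda s: s.inflight)
        target.inflight += 1
        return target

    def complete(self, server):
        # Called when a server finishes a transaction.
        server.inflight -= 1
```

Spreading transactions this way keeps any one server from becoming a bottleneck while its peers sit idle, which is how a monitor squeezes higher throughput out of the same hardware.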