

Notice that you are also writing to a log file. This is important for a homespun gateway, to keep track of every step of the process of data movement from the mainframe to the Oracle instance.

In a more realistic case, of course, you would use the customer’s frequent-flier number rather than a nonunique name. Using the name, however, illustrates that you are updating a customer’s miles based on new travel for the month, so it remains here for teaching purposes only.

Instead of updating directly from a file, you can load the DB2 download into an intermediate Oracle table called monthly_upload, using a SQL*Loader script (Oracle’s bulk-load utility) to accomplish the load. After this is tested, you can write a trigger that automatically updates the customer_miles table whenever data is inserted into the monthly_upload table. This table is your record of the previous month’s transfer; if late transactions need to be loaded later, they can be loaded without affecting your Passive Gateway. This is the trigger used to update your customer_miles table:


Create or replace trigger customer_miles_ins
Before insert
on MYDATABASE.monthly_upload
For each row
Begin
    Update customer_miles
    Set total_miles = total_miles + :new.miles_loaded
    Where customer_name = :new.customer_name;

EXCEPTION
    WHEN others THEN
        raise_application_error(-20000, 'update of '||:new.customer_name||
            ' for '||:new.miles_loaded||' failed.');
End;
/
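The load into monthly_upload itself can be driven by a short SQL*Loader control file. Here is a sketch; the data file name, record layout, and column positions are illustrative assumptions, not part of the original example:

```sql
-- Hypothetical control file: monthly_upload.ctl
-- Loads the fixed-length DB2 download into the staging table,
-- which in turn fires the customer_miles_ins trigger row by row.
LOAD DATA
INFILE 'monthly_download.dat'
APPEND
INTO TABLE monthly_upload
(
  customer_name  POSITION(1:30)  CHAR,
  miles_loaded   POSITION(31:40) INTEGER EXTERNAL
)
```

You would invoke it with something like sqlldr control=monthly_upload.ctl, supplying your own connect information.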

In a production system, your error-handling would probably be more exact. Yet with these simple SQL scripts on the mainframe and the UNIX machine, you can create a Passive Gateway that both databases are unaware of, yet is vital in keeping both systems synchronized.
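One sketch of more exact error handling logs the failing row to an error table instead of aborting the whole load; the load_errors table here is hypothetical, and SQLERRM is copied into a local variable first because Oracle does not allow it directly inside a SQL statement:

```sql
-- Assumes a hypothetical table:
--   create table load_errors (err_time date, customer_name varchar2(30),
--                             miles number, err_msg varchar2(200));
Create or replace trigger customer_miles_ins
Before insert on MYDATABASE.monthly_upload
For each row
Declare
    v_msg varchar2(200);
Begin
    Update customer_miles
    Set total_miles = total_miles + :new.miles_loaded
    Where customer_name = :new.customer_name;
EXCEPTION
    WHEN others THEN
        v_msg := substr(sqlerrm, 1, 200);   -- capture the Oracle error text
        Insert into load_errors
        values (sysdate, :new.customer_name, :new.miles_loaded, v_msg);
End;
/
```

With this variant, a bad row is recorded for later review while the rest of the monthly load proceeds.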

Of course, we are using the network software to convert the character sets; furthermore, we are planning our DB2 COBOL download so that problems such as “packed decimal” numbers won’t need to be translated on the Oracle end of the process. We can even eliminate the network and have a very Passive Gateway where each month a tape is physically mailed to the Oracle site and read from tape by a Pro*C process (see Figure 8.8).


Figure 8.8.  Mainframe data transferred into the Oracle RDBMS via overnight mail due to unacceptable transfer rates from the mainframe.

Building a Passive Gateway, Oracle to Mainframe Example

In today’s computing world, a gateway from an Oracle application on an Open System back to a mainframe is more common. Many corporations have slowly moved applications off mainframes in their downsizing efforts, yet need these new Oracle applications to communicate back to the mainframe and the applications that have not been downsized.

Again assume we have a frequent-flier database over the Web, but we need to send reward flights for our best customers into the accounting system at every month end. Our accounting system is an ancient application using VSAM files on a distant mainframe. In this case, the data needs to move from the Oracle RDBMS to the mainframe (Figure 8.9).


Figure 8.9.  SQL*Plus extract of data for an FTP cooperative processing upload back into a mainframe.

In this scenario, you need to convert Oracle data into a flat file and then move this file to MVS so a process can read it into a VSAM file format. To move data out of Oracle and into a flat file, use the trusted product SQL*Plus. SQL*Plus enables you to spool any SQL statement with formatting options. You need formatting options to create a fixed-length file that is easy for your mainframe friends to read.

In this example, you download data from the Oracle server on reward flights that have been awarded to customers in a given month. Each month, you will refresh the mainframe with this data.

You need to join two Oracle tables, Customer_Miles and Customer_Flight. Here’s the SQL*Plus script, with the formatting commands necessary to create a fixed-length data file with this information:

set pagesize 0
set linesize 80

set term off
set feed off
set echo off
set show off
set veri off
set head off

col customer_id   format 9999999999
col reward_miles  format 999,990.00
col depart_date   format a10
col return_date   format a10
col flight_no     format 99999

spool outputfile

Select
cm.customer_id,
cm.reward_miles,
to_char(cf.depart_date, 'mm/dd/yyyy') depart_date,
to_char(cf.return_date, 'mm/dd/yyyy') return_date,
cf.flight_no
from
Customer_Miles cm,
Customer_Flight cf
where cm.customer_id = cf.customer_id
and   to_char(cf.depart_date,'MM/YYYY') = '&1'
/

spool off

exit

This SQL will be executed by a cron job at the end of each month. The substitution variable ‘&1’ accepts the first parameter, which in this case is the two-digit month and the four-digit year in the same MM/YYYY form the query uses. When the script fires for December 1999, this parameter is 12/1999. No year 2K problem here!

After this script creates the spool file outputfile.lst (SQL*Plus appends the .lst extension when none is given), the cron job fires again and calls an FTP process using the put command to move the file to the mainframe.
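The month-end automation might be sketched as a crontab entry driving a small shell script. Everything here is an assumption for illustration: the schedule, script path, connect string, host name, and the MVS dataset name:

```shell
# Hypothetical crontab entry: run in the early hours of month end
# 0 1 1 * * /home/oracle/monthly_extract.sh

#!/bin/sh
# monthly_extract.sh -- extract the month's reward flights, FTP to MVS
MONTH=`date +%m/%Y`                  # e.g. 12/1999, matching 'MM/YYYY';
                                     # adjust if extracting the prior month
sqlplus -s scott/tiger @reward_extract.sql $MONTH
ftp -n mainframe.host <<EOF
user mvsuser mvspass
put outputfile.lst 'PROD.REWARD.UPLOAD'
quit
EOF
```

In practice you would not embed passwords in the script; this only shows the shape of the hand-off from SQL*Plus to FTP.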

At this point, the spool file can be loaded into a VSAM file and the mainframe accounting package can access the new data.

One important note here: We performed a join in our SQL because we are operating under the relational paradigm of normalized data. In most cases, nonrelational data appears “flatter” and might necessitate many joins of our tables. This is a common phenomenon when moving data from a relational database to an older mainframe database that is nonrelational.

By writing a few simple programs, we have bypassed the need to use expensive networking and gateway products. We can add more complexity, such as error and status logs, that would be passed along with the data between our “open” Oracle system and the legacy mainframe with which we want to coexist.

Coexistence and Replication

The purest form of database coexistence is a distributed database. A distributed database appears as one database to users and developers, yet is a series of separate databases at the physical level. To accomplish this task, we need to replicate data on these different databases so that at the higher levels data appears uniform regardless of which system we query or modify.
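In Oracle terms, the simplest replication mechanism of this era is a read-only snapshot refreshed on a schedule. A sketch, with the database link name, connect string, and refresh interval all assumed for illustration:

```sql
-- Hypothetical link to the remote database holding the master copy
Create database link hq_link
connect to scott identified by tiger using 'HQ';

-- Local read-only copy of the remote customer_miles table,
-- fully refreshed once a day
Create snapshot customer_miles_copy
refresh complete start with sysdate next sysdate + 1
as select * from customer_miles@hq_link;
```

Local queries then read customer_miles_copy as if it were an ordinary table, while the master data stays on the remote system.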

