

Network Computing Architecture

The Network Computing Architecture (NCA) builds on existing Web technology, incorporates various standards, and provides the first framework within which serious, mission-critical application development and deployment in a networked environment are possible. Also known as Oracle’s Open Network Computing Architecture, the NCA realizes the potential of Web technology for cost-effective cross-platform application deployment by integrating the strengths of traditional client/server programming with distributed objects. In a word, the NCA is revolutionary.

The core of the Network Computing Architecture is based on the CORBA distributed object model.

CORBA, the Common Object Request Broker Architecture, is a standard for Object Request Brokers (ORBs). It was established by the Object Management Group (OMG), a consortium of over 700 companies representing the entire computer industry. (The notable exception is Microsoft, which promotes its own distributed object standard, DCOM.)

A CORBA ORB is interoperable, meaning that objects (such as a COBOL program or a set of C++ classes) do not need to know the language-specific details required to invoke another object; instead, they abide by published interface definitions and can invoke any other object regardless of its specific implementation. Using CORBA's Interface Definition Language (IDL), existing applications (for example, COBOL, CICS, IMS, and so on) can be “wrapped” and made to look like objects on the ORB. Using CORBA 2.0’s Internet Inter-ORB Protocol (IIOP) services (say that three times fast), any ORB can connect to any other ORB on the network.
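The kind of language-neutral contract IDL describes can be sketched as follows. This is a minimal, hypothetical example (the `Bank` module and `Account` interface are invented for illustration, not taken from any Oracle or OMG sample); an IDL compiler would generate client stubs and server skeletons from it in each target language:

```
// Hypothetical CORBA IDL sketch: a wrapped legacy banking service.
module Bank {
  interface Account {
    // Attributes and operations define the published contract;
    // callers never see whether the implementation is COBOL, C++, etc.
    readonly attribute double balance;
    void deposit(in double amount);
    void withdraw(in double amount);
  };
};
```

A COBOL program behind this interface and a C++ client in front of it never exchange language-specific details; both sides code only to the IDL-defined operations, with the ORB marshaling requests between them.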

In March of 1996, Marc Andreessen of Netscape said, “The next shift catalyzed by the Web will be the adoption of enterprise systems based on distributed objects and IIOP (Internet Inter-ORB Protocol). IIOP will manage the communication between the object components that power the system. Users will be pointing and clicking at objects available on IIOP-enabled servers.”

CORBA advances the ability of distributed objects to communicate across the network. The NCA takes this further, to build a foundation on which diverse applications can be implemented and communicate within the same architecture.

The NCA consists of three platforms: (1) the universal data server, (2) the application server, and (3) the universal client.

The Universal Data Server. This could be the Oracle8 RDBMS, but the Web Application Server (WAS) works well with non-Oracle data servers as well.

The Universal Application Server. This is WAS 3.0, which includes the listener, the Web Request Broker, and cartridges (applications).

The Universal Client. This could be a PC with browser software, such as Oracle’s Power Browser, Netscape Navigator, or Microsoft Internet Explorer. It could also be a computer known as a thin client: an inexpensive, stripped-down PC with few of the peripherals that most PCs come with today. All that is required is a monitor, keyboard, modem or network card, browser software, and either a local hard drive or some other mechanism to cache Web pages. Oracle’s vision for the thin client is the Network Computer, or NC, from the Oracle subsidiary Network Computer Inc. (NCI).

These three platforms, and the entire NCA, are depicted in Figure 27.3.


Figure 27.3.  The Network Computing Architecture (NCA).

To understand the primary goal and benefits of the NCA, it’s helpful to understand some things about the history of network architecture.

Background

Most corporations and organizations set up their own private networks long before the advent of the Internet, and even while the Internet was quietly growing. Most organizations still prefer and require some sort of closed private network. This section describes the evolution of these networks, and how the Web is impacting them today.

Before client/server networks, the host-based model was widely used. Host-based networks consisted of a series of “dumb” terminals connected to one or more large mainframe computers. The dumb terminals didn’t run any software other than the bare necessities to connect to the mainframe. All application software and data were resident on the mainframe, and the dumb terminals were essentially viewports into it. Users worked on dumb terminals to run software on the mainframe. Among the advantages of a host-based system was centralized maintenance, such as backups and upgrades.

There were many disadvantages, though. Mainframes tended to be closed systems: software that ran on one brand of mainframe wouldn’t run on another, and operating systems differed greatly. The cost of switching from one vendor’s system to another was often prohibitively high. Furthermore, since the mainframe was extremely expensive to begin with (many cost millions of dollars), the system wasn’t horizontally scalable. Once one mainframe reached its maximum user capacity, adding even one more user meant adding another mainframe—a major investment in hardware. Nor was it vertically scalable—if the application needed just a little more CPU or storage and the mainframe was maxed out, the required investment could be significant.

