

Finally, graphical user interfaces (GUIs), which are intuitive and require little training or lead time to get users operational, were not practical in a host-based system. (Contrary to popular belief, the GUI was pioneered at Xerox PARC in the 1970s, not by Apple or Microsoft.) A graphical user interface manages communication with the user through a complex set of graphic images, icons, buttons, check boxes, and fine-grained responses to mouse movements. Since all applications ran on the host, and since the programs that manage GUI displays are, by nature, communication-intensive, GUI displays tended to clog the network. The network was (and is) the weak link in the chain, and it couldn't handle the extra load while still supporting all of its other responsibilities. So while GUIs were technically possible in a host-based system, they were definitely impractical.

All of this changed with the client/server model, which consists of a series of personal computers (PCs) linked together in a Local Area Network (LAN), all connected to a single inexpensive central file server or some other dedicated PC set aside to be the network's server. Files common to everyone are stored on the server; files that a client uniquely requires are stored on that client. The entire network cost could be counted in thousands of dollars rather than millions. And because the clients are PCs instead of dumb terminals, the responsibility for executing applications can be shared by the client, offloading most of the work from the server and allowing a less powerful server to handle more clients more effectively.

The client/server model offers many advantages. It can support hardware and software from different vendors; it's possible to link PCs and Macintosh computers using all sorts of network and application software, yet have everything connected in one LAN. Client/server networks also scale easily. Adding another user usually means just adding another PC, and if the server reaches the limit of how many PCs it can support, the cost of another server is incidental compared to that of a host-based mainframe. The same is true for applications and data: the cost of accommodating software growth is relatively low. And since each user runs a PC on the desktop, GUI management can be handled on the client instead of at the server, which eliminates the network bottleneck that existed in the host-based model and makes advanced GUI displays very practical.

But client/server systems are not centralized, and the maintenance costs of a client/server system can be deceptively high. If something goes wrong, the responsibility for maintenance can be difficult to assign: is the problem in the PC? The Macintosh? The network itself? The server? The mix of different operating systems, applications, and resources in one network, first presented as a great advantage, can turn into a maintenance nightmare if the hardware and software diversity gets out of hand. A troubleshooter must be familiar with the entire system, and MIS departments find it difficult to identify personnel who individually possess a working knowledge of all the components. Furthermore, software and hardware upgrades, application deployment, and maintenance work on the network require either the MIS department to travel from PC to PC or each user to bear some of the burden.

Enter Tim Berners-Lee, the father of the World Wide Web (mentioned earlier). While working at the European Laboratory for Particle Physics (CERN) in 1989, Berners-Lee envisioned a system for the widespread distribution of files, including text and images, that combined the best elements of both the host-based and client/server models. He moved all of the files off the client and back to the server. The Web browser he created is really a generic application that handles the complex GUI management on the client instead of on the server, making GUI systems across the network practical for the first time. For more information, see the previous discussion about Web Architecture.

Interestingly, the Web really represents something of a combination of host-based networks and client/server networks. But it’s designed for serving files and, even with the addition of CGI (described earlier), the Web cannot handle complex applications.
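To make that limitation concrete, consider the sketch below of a CGI-style program, written in Python. It is an illustration only, not an example from the text: the hit-counter idea and the counter file path are invented for the purpose. Under the classic CGI model, the Web server starts a fresh process for every request, so the program keeps no state of its own between requests and must write anything it wants to remember to external storage each time. Complex applications need sessions, transactions, and shared state, which is exactly what this one-process-per-request model handles poorly.

#!/usr/bin/env python3
# A minimal CGI-style script (illustrative sketch, not from the original text).
# The Web server launches this program anew for each HTTP request, so any
# state it needs, such as this hit counter, must live outside the process.
import os

COUNTER_FILE = "/tmp/hit_counter.txt"  # hypothetical location for the counter


def read_count():
    """Return the last stored count, or 0 if the counter file doesn't exist yet."""
    try:
        with open(COUNTER_FILE) as f:
            return int(f.read().strip() or 0)
    except (FileNotFoundError, ValueError):
        return 0


count = read_count() + 1
with open(COUNTER_FILE, "w") as f:
    f.write(str(count))

# A CGI program must print its own HTTP headers, a blank line, then the body.
print("Content-Type: text/html")
print()
print("<html><body>")
print(f"<p>You are visitor number {count}.</p>")
print(f"<p>Request came from: {os.environ.get('REMOTE_ADDR', 'unknown')}</p>")
print("</body></html>")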

Oracle's Network Computing Architecture (NCA) builds on Web technology to make complex applications possible. The NCA combines the best of the host-based model with the best of the client/server model. In the NCA, as Sun Microsystems puts it, the "network is the computer." As on the Web, the NCA moves all applications and data back to the server and leaves a Web browser, the generic GUI management tool, on the client. In NCA terminology, the client is often called a thin client, something between a PC and a dumb terminal. It doesn't have the usual peripherals (CD-ROM, floppy drive, and so on), except possibly a relatively small hard drive or some other means of caching pages from the browser. Instead, all maintenance, backups, upgrades, and so on are done on the network, which can be an intranet or the Internet, and programs are downloaded across the network on an as-needed basis.

The NCA is fully scalable, allowing any combination of hardware to serve client requests. It is easy to maintain; in fact, users don't have to maintain anything. Instead, the network can be managed effectively by a professional team that handles backups and upgrades, freeing users from having to address those issues. If the system has a problem, the network team addresses it, and users are not bothered with it. In many ways, it's a return to the original host-based computing platform, when users worked on dumb terminals and ran software that was stored only on the host.

