What is a database server?


The term client-server (C/S) can broadly be understood as a model for application development. More precisely, client-server is an approach that defines a distribution of roles and tasks between client and server: the party that commissions work and the party that carries it out.

According to these roles, the two must communicate with each other. This means that the client must be able to direct requests to the server, which in turn must be able to answer them. The quality of request and response defines the respective level as well as the advantages and disadvantages of C/S technologies.

The simplest level of C/S computing is, even if only from a purely academic point of view, the use of a file system server. A Netware server, for example, enables files to be shared across different workstations. The client, in this case the workstation, directs requests to the server.

Decisive in this C/S model is the quality of the communication. In the case of a file system server, client and server must talk to each other at the level of file operations such as "open file" or "read the next 500 bytes".
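The data volume this implies can be sketched in a few lines of Python (the record layout, file name and query are invented for illustration): at the file-operation level, every record crosses the network, even those the client immediately discards.

```python
# Sketch of the file-server model: the "server" only delivers raw
# bytes, so the client must read and filter every record itself.
import os
import struct
import tempfile

# Assumed fixed record layout: 10-byte name plus 4-byte balance.
RECORD = struct.Struct("10s i")

def write_table(path, rows):
    with open(path, "wb") as f:
        for name, balance in rows:
            f.write(RECORD.pack(name.encode().ljust(10), balance))

def select_negative_balances(path):
    # File-operation level: "open file", "read the next record", ...
    # Every record travels to the client, hits and misses alike.
    hits = []
    with open(path, "rb") as f:
        while chunk := f.read(RECORD.size):
            name, balance = RECORD.unpack(chunk)
            if balance < 0:
                hits.append((name.rstrip(b" ").decode(), balance))
    return hits

path = os.path.join(tempfile.mkdtemp(), "customers.dat")
write_table(path, [("Smith", 100), ("Jones", -50), ("Brown", -10)])
negatives = select_negative_balances(path)
```

With three records the overhead is invisible; with a million records and many workstations, the full table scan across the wire becomes exactly the performance problem described above.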

The disadvantages of this lowest level of the C/S model become apparent with more complex operations such as database operations. Since the protocol between client and server works at the file-operation level, considerable amounts of data travel between the two over the network. Especially with Dbase, Btrieve or ISAM solutions (Index-Sequential Access Method), performance and stability problems arise here, caused by the increasing complexity of the application solutions and a constantly growing number of users in the network.

Additional difficulties arise with these network-oriented applications when a workstation, that is the client, cannot completely process a database operation. The result may be that information on the client side is considered processed (for example the update of an index file), while the operation on the server side, such as the database update, is never completed. Since a protocol at the file-operation level does not allow a corresponding notification of the client station, inconsistent data sets are almost inevitable.

The quality of a C/S solution consequently results from the quality of the protocol. In the case of a network application that manages its databases centrally on a file system server, this quality is so low that the application model generally does not even deserve the label of a C/S solution.

These problems can be addressed by changing the quality of the protocol, as is the case with SQL-based database management systems (DBMS). These are increasingly gaining acceptance in the PC desktop environment as well and are considered the classic representatives of the first-generation C/S model.

The client no longer issues its requests at the level of file operations but at the level of database operations (for example "open database", "change data record" and so on). The protocol level is thus no longer the physical description level of the file structures but a logical layer of the database structures.

The database server is then responsible for the entire database including the index files. The result is that the application becomes more stable and processing speed increases noticeably, because the amount of information transported between client and server is significantly reduced.
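The same kind of query at the database-operation level can be sketched with Python's sqlite3 module standing in for such a DBMS (table and data are invented for illustration): the client sends one logical statement and receives only the matching rows.

```python
# Database-operation level: one logical request replaces the
# record-by-record file traffic of the file-server model.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE customers (name TEXT, balance INTEGER)")
con.executemany("INSERT INTO customers VALUES (?, ?)",
                [("Smith", 100), ("Jones", -50), ("Brown", -10)])

# One request, one result set: the scan over uninteresting records
# and all index handling stay on the server side.
rows = con.execute("SELECT name, balance FROM customers "
                   "WHERE balance < 0 ORDER BY name").fetchall()
```

Only the statement and the two matching rows would cross the network; the file-level model would have shipped the whole table.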

The implementation model of a first-generation C/S solution can be described as follows:

With the exception of database operations, the client application handles all tasks, from the user interface to the application logic and business rules. On the server side there is an "intelligent" DBMS that provides the necessary database operations at the necessary level of abstraction. In the context of an SQL DBMS, these are commands such as delete, insert, update and select.

The advantages are obvious: the server performs operations on the database as closed (atomic) units and guarantees their success. The failure of a single client station can therefore no longer jeopardize the integrity of the entire application solution.

A first-generation C/S application thus solves numerous difficulties that arise in an expanding IT infrastructure. However, problems remain that stem from constant growth and ever-new information needs. The main problem is that in a first-generation C/S solution the business rules must be implemented on the client side.

In larger application solutions composed of several independent programs on different platforms, this results in source code redundancies. These in turn create maintenance problems, since every change to a business rule requires updating several source code bases, with the corresponding costs.

In a first-generation client-server system, extending the functionality of an application solution always increases the code size and resource requirements of the client. The possible consequence is a hardware upgrade at all client stations or even a change of operating system, because the platform no longer supports the growing demands of the solution.

These constantly growing requirements have meanwhile become a decisive problem for such application solutions; the term "fat client" has been coined for this phenomenon. In addition, a large number of such C/S solutions are already experiencing a cost explosion that can no longer be justified.

SQL vendors address this problem with mechanisms such as stored procedures: business rules that are implemented on the server and that the server executes in response to events such as update, delete or insert. The advantages are clear: some of the business rules need to be implemented only once and are immediately available to all clients of the database server.

However, setting up and testing stored procedures is difficult in practice and sometimes cannot be done with the tools available from the compiler manufacturer. In addition, they have to be implemented in classic system or high-level languages such as C, C++, Cobol or Rexx. Some SQL vendors have therefore equipped their SQL servers with their own scripting languages, which are intended to enable better integration and better maintainability of this code.

If one takes a closer look at the two C/S models described, three logical levels or layers can be clearly identified:

- user interface and interaction model,

- business and application rules, and

- database access and management.
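The three layers can be sketched as separate functions; in a network application all three would run on the client (names and the business rule are invented for illustration):

```python
import sqlite3

# Layer 3: database access and management
def db_connect():
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE customers (no TEXT, active INTEGER)")
    con.execute("INSERT INTO customers VALUES ('1410-65', 1)")
    return con

# Layer 2: business and application rules
def deactivate_customer(con, number):
    # Invented rule: customers are deactivated, never deleted.
    con.execute("UPDATE customers SET active = 0 WHERE no = ?", (number,))
    row = con.execute("SELECT active FROM customers WHERE no = ?",
                      (number,)).fetchone()
    return row is not None and row[0] == 0

# Layer 1: user interface and interaction model
def main():
    con = db_connect()
    return "done" if deactivate_customer(con, "1410-65") else "failed"

result = main()
```

The C/S generations differ only in where the cut is made: first-generation systems move layer 3 to the server, second-generation systems move layers 2 and 3.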

In a network application, all three layers are implemented on the client side. The first-generation C/S model outsources the third layer and transfers responsibility for the database to an "intelligent" DBMS installed on the server. Some SQL vendors also offer the option of using stored procedures to outsource a small part of the second layer, the business rules, to the database server and thus physically to another location.

In second-generation C/S solutions, on the other hand, the three logical layers of an application solution are mapped one-to-one onto physical tiers. Second-generation C/S solutions have therefore also been given the name three-tier model. The basic idea of the second-generation C/S model is once again to improve the quality of the protocol with which client and server communicate and assign tasks to each other.

In this model, the client no longer directs requests to the server at the level of file or database operations. Rather, its requests are on a higher, more abstract level defined by the business logic.

An example: The client sends the job "Delete customer no. 1410-65" to the server. The server then not only has to initiate the deletion process, but also all associated checks and follow-up operations. The server knows all business rules and executes them.
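This "delete customer" job can be sketched as a single server-side operation that bundles checks, deletion and transaction handling (sqlite3 stands in for the back-end; the rule that customers with open orders may not be deleted is an invented example):

```python
# Business-logic-level request: the application server runs the
# checks and follow-up operations itself, as one atomic unit.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE customers (no TEXT PRIMARY KEY);
CREATE TABLE open_orders (customer_no TEXT);
INSERT INTO customers VALUES ('1410-65'), ('2200-01');
INSERT INTO open_orders VALUES ('2200-01');
""")

def delete_customer(con, number):
    """The whole job "delete customer no. X", checks included."""
    with con:  # one transaction: all or nothing
        open_cnt = con.execute(
            "SELECT COUNT(*) FROM open_orders WHERE customer_no = ?",
            (number,)).fetchone()[0]
        if open_cnt:
            return False  # check failed: customer still has open orders
        con.execute("DELETE FROM customers WHERE no = ?", (number,))
        return True

ok = delete_customer(con, "1410-65")       # no open orders: deleted
blocked = delete_customer(con, "2200-01")  # open order: refused
```

The client only sees "done" or "refused"; it needs no knowledge of the checks, which is exactly the point of the second-generation model.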

The server takes on a new role

The server side thus takes on a new role. Such computers are therefore called application servers or, in a simpler form, transaction servers. This designation reflects the changed distribution of tasks.

As part of such a solution, only the user interface needs to be implemented on the client side. The application logic and above all the business logic are implemented on a central server, the application server.

This distribution of tasks and roles means that, for example, it is no longer necessary to upgrade the client stations when expanding an existing application solution, because the bulk of the additional resource requirements arises at a central point, the application server.

It is also possible to operate different client platforms inexpensively, since only the user interface has to be installed on the client side. A change in the business rules can be implemented at a central point without having to do anything at the client stations.

In IT practice, application servers are deployed as so-called workgroup servers. This means, for example, that the sales group has a local server installed that provides the relevant business rules and possibly a "local" DBMS.

Workgroup servers are usually based on OS/2 or Windows NT, though for historical reasons Unix derivatives can also be found here. The local workgroup servers are in turn connected to a DBMS back-end via the network.

Many wishes are still unfulfilled

Implementing application servers is currently still complex, since no modular toolkit is known that would allow such servers to be built quickly and efficiently. The only exception is the "Galaxy Application Server" announced by Alaska Software.

As a rule, a large number of tools are currently in use, mostly C or C++ development packages. These are coupled with so-called middleware products such as "MQ Series" from IBM or "EZ-RPC" from Noblenet in order to ensure cross-platform execution of functionality. Somewhat more complex and partly proprietary approaches via remote OLE, DSOM or transaction servers can also be found.
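The RPC idea behind such middleware can be sketched with Python's standard xmlrpc module in place of a commercial product: the client calls a remote function by name, and the middleware hides the transport.

```python
# Minimal RPC sketch: server registers a function, client calls it
# over the network as if it were local. The function body is an
# invented placeholder for a real business operation.
import threading
from xmlrpc.client import ServerProxy
from xmlrpc.server import SimpleXMLRPCServer

server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
server.register_function(lambda no: f"customer {no} deleted",
                         "delete_customer")
port = server.server_address[1]  # ephemeral port chosen by the OS
threading.Thread(target=server.serve_forever, daemon=True).start()

# Client side: the call looks like a local function call.
client = ServerProxy(f"http://127.0.0.1:{port}")
reply = client.delete_customer("1410-65")
server.shutdown()
```

Real middleware adds what this sketch omits: authentication, queuing, transactions and cross-language stubs, which is precisely the modular toolkit the next paragraph calls for.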

What would be desirable is a kind of modular toolkit offering ready-made interface models, communication protocols, database access, synchronization, authentication and so on, in order to relieve the application developer of this work.

Anyone wanting to equip an existing database system with more logic, such as business rules, should consider using application server systems. Doing so fundamentally improves the maintainability and scalability of the resulting applications and thus the cost situation.

This applies particularly to heterogeneous networks in which the client stations run different operating systems. In complex applications, the "natural" separation into user interface, business logic and database access also allows more efficient software development, since the various tasks can be clearly assigned to different developer groups.

Clicked

Client-server systems are generally understood to be state-of-the-art application solutions characterized by powerful client architectures with graphical user interfaces and one or more database servers in the back-end. In the relevant specialist publications, however, terms such as "application server", "three-tier model" or "second-generation client-server" are increasingly being used. What is behind these terms, and what are the advantages, disadvantages and differences of the respective approaches?

Glossary

Xbase: Xbase is a very powerful language that shines with its high level of abstraction and seamless database integration. The best-known representatives are Dbase, Foxpro and Clipper. Unfortunately, Xbase is usually equated with the DBF databases behind it. This no longer applies, however, and obscures the essential features of Xbase such as performance and simplicity of the language.

Btrieve: Btrieve is a frequently used network-oriented database system that was probably the best-known pure ISAM (Index-sequential Access Method) representative in the PC desktop area.

Isam: The Index-Sequential Access Method underlies the two representatives mentioned above and describes the procedure by which database management is handled there.

Front-end: In the context of an application solution, the interface to the user is called the front-end, as it is what the user sees.

Back-end: The database management system is usually understood as the back-end, as it is mostly "invisible" to the user.

DSOM: The Distributed System Object Model is a programming-language-independent object encapsulation model based on the Corba standard of the Object Management Group (OMG), which IBM implements and offers on the most important platforms such as OS/2, Windows, AIX, OS/400 and MVS.

* Steffen Pirsig is Technical Director at Alaska Software GmbH in Frankfurt.