Object computation

Programming is, put briefly, providing a computing device with the instructions it needs to perform a particular computation. In the words of Dijkstra: "Programming is the combination of human reasoning and symbol manipulation skills used to develop symbol manipulators (programs). By supplying a computer to such a symbol manipulator it becomes a concrete one." Although we are by now used to quite fashionable computing devices, including graphic interfaces and multimedia peripherals, the abstract meaning of a computing device has not essentially altered since the original conception of the mathematical model that we know as the Turing machine (see below).

Despite the fact that our basic mathematical model of a computing device (and hence our notion of computability) has not altered significantly, the development of high level programming languages has meant a drastic change in our conception of programming. Within the tradition of imperative programming, the introduction of objects, and object-oriented programming, may be thought of as the most radical change of all. Indeed, at the time of the introduction of Smalltalk, one spoke of a true revolution in the practice of programming.


The object model

Message

Encapsulation

Protocol


slide: The object model

The object model introduced by Smalltalk breaks radically with our traditional notion of computation. Instead of regarding a computation as the execution of a sequence of instructions (changing the state of the machine), object-based computation must be viewed as sending messages between objects. Such a notion of computation had already been introduced in the 1960s in the programming language Simula (see Dahl and Nygaard, 1966). Objects were introduced in Simula to simulate complex real-world events, and to model the interactions between real-world entities.

In the (ordinary) sequential machine model, the result of a computation is (represented by) the state of the machine at the end of the computation. In contrast, computation in the object model is best characterized as cooperation between objects. The end result then consists, so to speak, of the collective state of the objects that participated in the computation. See slide 2-model. Operationally, an object may be regarded as an abstract machine capable of answering messages. The collection of messages that may be handled by an object is often referred to as the protocol obeyed by the object. This notion was originally introduced in the Smalltalk programming environment to provide a means of grouping the messages to which an object may respond. For instance, the distinction between methods for initialization and methods for modification or processing may be convenient when developing or using a program. The notion of protocol may also be given a more formal interpretation, as has been done for instance in the notion of contracts (introduced in Eiffel), which state the requirements that must be adhered to when communicating with an object.
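As a minimal sketch, in Java and with purely hypothetical names (Account, deposit, withdraw), the protocol of a simple account object may be pictured as the collection of messages it is prepared to answer, grouped by purpose, with the contract requirements an Eiffel class would state explicitly noted here only as comments:

  // The protocol of an account object: the messages it answers.
  interface Account {
      // initialization
      void open(String owner);
      // modification
      void deposit(double amount);    // requires: amount > 0
      void withdraw(double amount);   // requires: amount <= balance()
      // inspection
      double balance();
  }

A client only needs to know this protocol; how the messages are answered remains hidden, which is precisely the point of the next paragraph.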

Structurally, an object may be regarded as a collection of data and procedures. In principle, the data are invisible from the outside and may be manipulated only by invoking the right procedure. In a pure object-oriented language such as Smalltalk or Eiffel, sending a message to an object is the only way of invoking such a procedure. Combined, data-hiding and message interface abstraction will be referred to as encapsulation. In practice, object-oriented languages, while all supporting objects as collections of data and procedures in some way, may differ subtly in the degree to which, and the way in which, they support data-hiding and abstraction.
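Continuing the hypothetical account sketch, encapsulation may be illustrated as follows: the data (the balance) are invisible from the outside, and the state changes only when the object itself answers a message:

  class BankAccount implements Account {
      private double balance = 0;     // data: invisible from the outside
      private String owner;

      public void open(String owner) { this.owner = owner; }

      public void deposit(double amount) {
          if (amount > 0) balance += amount;                // the only places where
      }                                                     // the state may change
      public void withdraw(double amount) {
          if (amount > 0 && amount <= balance) balance -= amount;
      }
      public double balance() { return balance; }
  }

Sending a message then takes the familiar form account.deposit(100); no client can reach the balance directly.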

Computability and complexity

Mathematically, a computing device consists of a finite table of instructions and a possibly infinite memory in which to store intermediate results. In order to perform a computation the device also needs an input and some means by which to display the results.

For now, we need not be concerned with the precise mathematical details of our model of a computing device. For a more precise and elaborate description of the Turing machine, the interested reader is referred to [Hopcroft]. What is important, however, is that this model captures in a very precise sense the notion of computation, in that it allows us to characterize what can be computed, and also what a computation will cost, in terms of computing time and memory usage.
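To make the model a little more concrete, the following sketch (assuming nothing beyond the description above; the names and encoding are only illustrative) represents a Turing machine in Java as a finite table of instructions operating on an unbounded tape:

  import java.util.HashMap;
  import java.util.Map;

  class TuringMachine {
      record Key(String state, char read) {}
      record Action(String next, char write, int move) {}   // move: -1 left, +1 right

      private final Map<Key, Action> table = new HashMap<>();       // finite instruction table
      private final Map<Integer, Character> tape = new HashMap<>(); // potentially infinite memory

      void rule(String s, char r, String next, char w, int move) {
          table.put(new Key(s, r), new Action(next, w, move));
      }

      // Perform a computation: run until no instruction applies; the tape
      // then represents the result (the means to display it are left out).
      void run(String input, String start) {
          for (int i = 0; i < input.length(); i++) tape.put(i, input.charAt(i));
          int head = 0;
          String state = start;
          Action a;
          while ((a = table.get(new Key(state, tape.getOrDefault(head, '_')))) != null) {
              tape.put(head, a.write());
              head += a.move();
              state = a.next();
          }
      }
  }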

An interesting, but perhaps somewhat distressing, feature of the Turing machine model is that it is the strongest model we have, which means that any other model of computation is at best equivalent to it. Parallel computation models do not, in effect, extend the power of (sequential) Turing machines: they may speed up a computation, but only by a factor that is at most linear in the number of processors. In other words, the Turing machine defines what we may regard as computable and establishes a measure of the complexity of a computation, in space and time. The awareness of the intrinsic limitations imposed by a precise mathematical notion of computability has, for example, led us to regard the claims of artificial intelligence with some caution, see [Rabin74]. However, the theoretical insight that a problem may in the worst case not be solvable in finite time or space should not hinder us in looking for an optimal, approximate solution that is reachable with bounded resources.

An equally important feature of the Turing machine model is that it gives us an illustration of what it means to program a computing device, that is, to instruct the machine to perform actions dependent on its input and state. As an extension to the model, we can easily build a universal computing device, into which we may feed the description of some particular machine in order to mimic the computation of that machine. At first sight, this seems to give us a more powerful machine; however, this has proven not to be the case. Neither does this universal device enlarge the class of computable problems, nor does it affect in any significant sense the computational complexity of what we know to be computable. See slide 2-devices.


Computing devices

Object-oriented programming does not enlarge the class of computable problems, nor does it reduce the computational complexity of the problems we can handle.
slide: Computing devices

Interestingly, there is an extension of the (basic and universal) Turing machine model that allows us to extend the narrow boundaries imposed by a mathematical characterization of computability. This extension is known as an oracle machine and, as the name suggests, the solution to an (otherwise) intractable problem must come from some external source, be it human, machine-like or divine (which is unlikely). This partly explains why intelligent systems (such as automatic translation systems) are, to a certain extent, intrinsically interactive, since only the human user can provide the (oracle) information needed to arrive at a solution.

Our model of a computing device quite precisely delimits the domain of computable problems, and gives us an indication of what we can expect the machine to do for us and what we cannot. It also illustrates the means we have available for programming such a device, in order to let it act in the way we want. Historically, the Turing machine model may be regarded as a mathematical description of what is called the Von Neumann machine architecture, on which most of our present-day computers are based. The Von Neumann machine consists of a memory and a processor that fetches data from the memory, does some computation and stores the data back in memory. This architecture has been heavily criticized, but no other model has yet taken its place. The criticism has been motivated strongly by the influence this architecture has had on the practice of programming. Traditionally, programs for the Von Neumann architecture are conceived as sequences of instructions that may modify the state of the machine. In opposition to this limited, machine-oriented view of programming, a number of proposals have been made that are intended to arrive at a more abstract notion of programming, where the machine is truly at the service of the programmer and not the other way around.
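A minimal sketch of this fetch-and-execute cycle, with a purely hypothetical three-instruction machine, may help to show why programs for this architecture naturally take the form of instruction sequences that modify the state of the machine:

  class VonNeumannMachine {
      int[] memory = new int[256];   // program and data share one memory
      int acc = 0;                   // accumulator register
      int pc = 0;                    // program counter

      void run() {
          while (true) {
              int instr = memory[pc++];              // fetch
              int op = instr / 100, addr = instr % 100;
              switch (op) {                          // decode and execute
                  case 1 -> acc = memory[addr];      // LOAD
                  case 2 -> acc += memory[addr];     // ADD
                  case 3 -> memory[addr] = acc;      // STORE
                  default -> { return; }             // HALT
              }
          }
      }
  }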

One of these proposals to arrive at a more abstract notion of programming is advocated as the object-oriented approach. Before studying the intricacies of the object-oriented approach, however, it may be useful to reflect on what we may expect from it. Do we hope to be able to solve more problems, or to solve known problems better? In other words, what precisely is the contribution of an object-oriented approach?

Based on the characterization of a computing device, some answers are quite straightforward. We cannot expect to be able to solve more problems, nor can we expect to reduce the computational complexity of the problems that we can solve. What an object-oriented approach can contribute, however, is simply better means with which to program the machine: better means to reduce the chance of (human) error, and better means to manage the complexity of the task of programming (though not to reduce the computational complexity of the problem itself). In other words, by providing abstractions that are less machine-oriented and more human-oriented, we may enlarge the class of problems that we can tackle in the reality of software engineering. However, we simply cannot expect that an object-oriented approach will in any sense enlarge our notion of what is computable.

Some history

In the last few decades, we have witnessed a rapid change in the technology underlying our computer systems. Simultaneously, our ideas of how to program these machines have changed radically as well. The history of programming languages may be regarded as a progression from low-level constructs towards high-level abstractions that enable the programmer to specify programs in a more abstract manner and hence allow problem-related abstractions to be captured more directly in a program. This development towards high-level languages was partly motivated by the need to be able to verify that a program adequately implements a specification (given in terms of a formal description of the requirements of an application). Regarded from this perspective, it is perhaps more appropriate to speak of a progression of paradigms of programming, where a paradigm must be understood as a set of mechanisms together with guidelines telling us how to employ these mechanisms.

The first abstraction mechanism beyond the level of assembly language and macros is provided by procedures. Procedures play an important role in the method of stepwise refinement introduced by the school of structured programming. Stepwise refinement allows a complex algorithm to be specified gradually, in more and more detail. Program verification amounts to establishing whether the implementation of an algorithm in a programming language meets its specification, given in mathematical or logical terms. Associated with the school of structured programming is a method of verification based on what has become known as Hoare logic, which proceeds by introducing assertions and establishing that procedures meet particular pre- and post-conditions.
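As a minimal sketch of this style of verification, the Hoare triple {n >= 0} s := sum(n) {s = n*(n+1)/2} may be rendered in Java with assertions; Hoare logic would be used to prove, rather than merely check at run time, that the post-condition follows from the pre-condition:

  class Verify {
      // {n >= 0}  s = sum(n)  {s == n*(n+1)/2}
      static int sum(int n) {
          assert n >= 0 : "pre-condition violated";
          int s = 0;
          for (int i = 1; i <= n; i++) {
              s += i;                    // invariant: s == i*(i+1)/2
          }
          assert s == n * (n + 1) / 2 : "post-condition violated";
          return s;
      }
  }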

Other developments in programming language research are aimed at providing ways in which to capture the mathematical or logical meaning of a program more directly. These developments have resulted in a number of functional programming languages (e.g. ML, Miranda) and logic programming languages, of which Prolog is the best-known. The programming language Lisp may in this respect also be regarded as a functional language.

The history of object-oriented programming may be traced back to a concern for data abstraction, which was needed to deal with algorithms that involve complex data structures. The notion of objects, originally introduced in Simula (Dahl and Nygaard, 1966), has significantly influenced the design of many subsequent languages (e.g. CLU, Modula and Ada). The first well-known object-oriented language was Smalltalk, originally developed to program the Dynabook, a kind of machine that is now familiar to us as a laptop or notebook computer. In Smalltalk, the data-hiding aspect of objects has been combined with the mechanism of inheritance, which allows the reuse of code defining the behavior of objects. The primary motivation behind Smalltalk's notion of objects, as a mechanism to manage the complexity of graphical user interfaces, has now proven its worth, since it has been adopted by most manufacturers of graphical user interfaces and window systems.
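Continuing the hypothetical account sketch given earlier, inheritance allows a new class to reuse the code defining the behavior of an existing one and to add behavior of its own:

  class SavingsAccount extends BankAccount {
      private double rate = 0.03;           // interest rate

      void addInterest() {                  // new behavior, implemented by
          deposit(balance() * rate);        // reusing the inherited methods
      }
  }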

Summarizing, from a historical perspective, the introduction of the object-oriented approach may be regarded as a natural extension to previous developments in programming practice, motivated by the need to cope with the complexity of new applications. History doesn't stop here. Later developments, represented by Eiffel, C++ (to a certain extent) and Java, more clearly reflect the concern with abstraction and verification, which intrinsically belongs to the notion of abstract data types as supported by these languages.