Implementation techniques -- Self

Instructor's Guide


A major concern of software developers is often the runtime efficiency of the system being developed. An order of magnitude difference in execution speed may indeed mean the difference between acceptance and rejection.

Improving performance


slide: Improving performance

There are a number of ways to improve the runtime efficiency of programs, including object-oriented programs. For example, [Ungar92] mentions the reliance on special-purpose hardware (which thus far has been rapidly overtaken by new general-purpose processor technology), the use of hybrid languages (which are considered error-prone), static typing (which for object-oriented programming provides only a partial solution) and dynamic compilation (which has been applied successfully for Self). See slide 5-opt.

As for the use of hybrid languages, of which C++ is an example, the apparent impurity of such an approach may (to my mind) even be beneficial in some cases. However, the programmer is required to deal more explicitly with the implementation features of the language than may be desirable. In general, with respect to both reliability and efficiency, statically typed languages have a distinct advantage over dynamically typed (interpreted) languages. Yet, for the purpose of fast prototyping, interpreted languages (like Smalltalk) offer an advantage in terms of development time and flexibility. Moreover, the use of (polymorphic) virtual functions and dynamic binding necessitates additional runtime support that is not needed in strictly procedural languages. Clever compilation reduces this overhead (even in the case of multiple inheritance) to one or two additional indirections.
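
To make the cost of dynamic binding concrete, the sketch below (in C++, with purely illustrative class names) shows a virtual member function call; the comments indicate the vtable indirection a typical compiler generates for such a call.

  #include <iostream>

  struct Shape {
      virtual double area() const { return 0.0; }   // dispatched via the vtable
      virtual ~Shape() = default;
  };

  struct Circle : Shape {
      double r;
      explicit Circle(double radius) : r(radius) {}
      double area() const override { return 3.14159265 * r * r; }
  };

  int main() {
      Shape* s = new Circle(2.0);
      // The call below is roughly compiled as: load the vtable pointer stored
      // in the object, fetch the entry for area(), and call it indirectly --
      // the one or two extra indirections mentioned above.
      std::cout << s->area() << std::endl;
      delete s;
      return 0;
  }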

Dynamic compilation

The language Self is quite pure and simple in design. It supports objects with slots (which may contain either values or code representing methods), shallow cloning, and implicit delegation (via a designated parent slot). Moreover, the developers of Self have introduced a number of techniques to improve the efficiency of prototype-based computing.
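
The following C++ fragment is a rough emulation of this object model; the SelfObject type and its slot representation are assumptions made purely for illustration, not an actual Self interface.

  #include <functional>
  #include <iostream>
  #include <map>
  #include <memory>
  #include <stdexcept>
  #include <string>

  // A Self-like object: named slots holding values or code, shallow
  // cloning, and delegation via a designated parent slot.
  struct SelfObject {
      using Method = std::function<int(SelfObject&)>;

      std::map<std::string, int> dataSlots;     // value slots
      std::map<std::string, Method> codeSlots;  // method (code) slots
      std::shared_ptr<SelfObject> parent;       // parent slot for delegation

      // Message send: search this object and then its parent chain for a
      // matching slot, but keep the original receiver, as delegation requires.
      int send(const std::string& selector) {
          for (SelfObject* o = this; o != nullptr; o = o->parent.get()) {
              auto it = o->codeSlots.find(selector);
              if (it != o->codeSlots.end()) return it->second(*this);
          }
          throw std::runtime_error("message not understood: " + selector);
      }

      // Shallow clone: copy the slots, share the parent.
      std::shared_ptr<SelfObject> clone() const {
          return std::make_shared<SelfObject>(*this);
      }
  };

  int main() {
      auto point = std::make_shared<SelfObject>();
      point->dataSlots["x"] = 3;
      point->codeSlots["getX"] =
          [](SelfObject& self) { return self.dataSlots["x"]; };

      auto p = point->clone();                   // shallow copy of all slots
      p->dataSlots["x"] = 7;
      std::cout << p->send("getX") << "\n";      // 7, via its own copied slot

      auto child = std::make_shared<SelfObject>();
      child->parent = point;                     // delegate unknown messages
      child->dataSlots["x"] = 5;
      std::cout << child->send("getX") << "\n";  // 5, method found in parent
  }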

Self -- prototypes

Dynamic compilation -- type information


slide: Dynamic compilation -- Self

The optimization techniques are based on dynamic compilation, which resembles the partial evaluation employed in functional and logic programming. Dynamic compilation exploits the type information gathered during the computation to improve the efficiency of message passing.
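
As a rough illustration (and not Self's actual machinery), the sketch below records which receiver types turn up at a single send site; a dynamic compiler may use such a profile to recompile the site for the dominant type. The SendSiteProfile type and the receiver classes are hypothetical.

  #include <iostream>
  #include <map>
  #include <typeindex>
  #include <typeinfo>

  // One profile per send site, recording the dynamic types of the
  // receivers actually encountered during the computation.
  struct SendSiteProfile {
      std::map<std::type_index, unsigned long> seen;

      void record(const std::type_info& t) { ++seen[std::type_index(t)]; }

      void report() const {
          for (const auto& [type, count] : seen)
              std::cout << type.name() << " seen " << count << " times\n";
      }
  };

  // Illustrative receiver classes.
  struct Number   { virtual ~Number() = default; };
  struct SmallInt : Number {};
  struct Float    : Number {};

  int main() {
      SendSiteProfile addSite;                 // profile for one send site
      SmallInt i; Float f;
      Number* receivers[] = { &i, &i, &i, &f };
      for (Number* r : receivers)
          addSite.record(typeid(*r));          // dynamic type of the receiver
      addSite.report();   // mostly SmallInt: specialize the site for SmallInt
  }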

Whenever a method is repeatedly invoked for receivers of the same type, the address of the method resolved for that type may be backpatched into the call site, and in some cases the result may even be inlined to replace the send altogether. Both techniques preserve the appearance of message passing, but at a much lower price. More elaborate techniques, such as lazy compilation (delaying the compilation of infrequently visited code) and message splitting (which involves dataflow analysis and the elimination of redundancies), may be applied to achieve further improvements.
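
A minimal sketch of this caching idea is given below, again in C++ and with hypothetical names; the Self compiler backpatches the machine code of the call site itself rather than updating a data structure, but the logic is analogous.

  #include <iostream>
  #include <string>
  #include <unordered_map>

  // An inline cache at a single send site: remember the class of the last
  // receiver and the method it resolved to, so repeated sends to receivers
  // of that class bypass the full lookup.
  struct Class;

  struct Object {
      const Class* cls;   // the receiver's class (its "map" in Self terms)
      int value;
  };

  using Method = int (*)(Object&);

  struct Class {
      std::unordered_map<std::string, Method> methods;  // method dictionary
  };

  struct InlineCache {
      const Class* cachedClass = nullptr;  // class seen on the previous send
      Method cachedMethod = nullptr;       // the method it resolved to

      int send(Object& receiver, const std::string& selector) {
          if (receiver.cls == cachedClass)          // fast path: cache hit
              return cachedMethod(receiver);
          Method m = receiver.cls->methods.at(selector);  // full lookup
          cachedClass = receiver.cls;               // "backpatch" the call site
          cachedMethod = m;
          return m(receiver);
      }
  };

  int doubleValue(Object& o) { return 2 * o.value; }

  int main() {
      Class number;
      number.methods["double"] = &doubleValue;
      Object a{&number, 21}, b{&number, 4};

      InlineCache site;                             // one cache per send site
      std::cout << site.send(a, "double") << "\n";  // miss: lookup, then patch
      std::cout << site.send(b, "double") << "\n";  // hit: direct call
  }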

Benchmark tests have indicated a significant improvement in execution speed (reaching up to 60% of the speed of optimized C code) in cases where type information could be obtained dynamically. The reader is referred to [Ungar92] for further details.