topical media & game development
Instructor's Guide
intro,
types,
algebra,
modules,
classes,
summary,
Q/A,
literature
Subsections:
Algebraic specification techniques have
been developed as a means to specify the design
of complex software systems in a formal way.
The algebraic approach has been motivated by
the notion of
information hiding
put forward in
[Parnas72a] and the ideas
concerning
abstraction expressed in
[Ho72].
Historically, the ADJ-group (see Goguen
et al., 1978) provided
a significant impetus to the algebraic approach
by
showing that abstract data types may be interpreted
as (many sorted) algebras.
(In the context of algebraic specifications the
notion of
sorts has the same meaning as
types.
We will, however, generally speak of
types.)
As an example of an algebraic specification,
look at the module defining the data type
Bool,
as given in slide
[8-Bool].
Algebraic specification -- ADT
Bool
adt bool is
functions
true : bool
false : bool
and, or : bool * bool -> bool
not : bool -> bool
axioms
[B1] and(true,x) = x
[B2] and(false,x) = false
[B3] not(true) = false
[B4] not(false) = true
[B5] or(x,y) = not(and(not(x),not(y)))
end
slide: The ADT Bool
In this specification two constants are introduced
(the zero-ary functions
true and false) and three functions
(and, or and not).
The or function is defined by employing not
and and, according to a well-known logical law.
These functions may all be considered to be (strictly)
related to the type bool.
Equations are used to specify the desired characteristics
of elements of type bool.
Obviously, this specification may mathematically be interpreted
as (simply) a boolean algebra.
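To make this concrete, the sketch below transcribes the specification into Haskell (an illustration added here, not part of the original slide): the constructors play the role of the generators true and false, and the defining equations mirror axioms [B1]-[B5].

  -- A minimal sketch of the ADT Bool in Haskell; True' and False'
  -- play the role of the generators true and false.
  module ADTBool where

  data Bool' = True' | False'
    deriving (Eq, Show)

  -- [B1], [B2]
  and' :: Bool' -> Bool' -> Bool'
  and' True'  x = x
  and' False' _ = False'

  -- [B3], [B4]
  not' :: Bool' -> Bool'
  not' True'  = False'
  not' False' = True'

  -- [B5]: or is defined via not and and (De Morgan)
  or' :: Bool' -> Bool' -> Bool'
  or' x y = not' (and' (not' x) (not' y))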
Mathematical models
The mathematical framework of algebras allows for
a direct characterization of the behavioral
aspects of abstract data types by means of equations,
provided the specification is consistent.
Operationally, this allows for the execution
of such specifications by means of term rewriting,
provided that some (technical) constraints are met.
The model-theoretic semantics of algebraic
specifications centers around the notion
of initial algebras, which gives
us the preferred model of a specification.
To characterize the behavior of objects
(that may modify their state) in an algebraic way,
we need to extend the basic framework of initial
algebra models either by allowing
so-called multiple world semantics or by
making a distinction between hidden and observable
sorts (resulting in the notion of
an object as an abstract machine).
As a remark, in our treatment we obviously cannot
avoid the use of some logico-mathematical formalism.
If needed, the concepts introduced will be explained
on the fly.
Where this does not suffice, the interested reader
is referred to any standard textbook on mathematical
logic for further details.
Signatures -- generators and observers
Abstract data types may be considered as modules
specifying the values and functions belonging
to the type.
In [Dahl92], a type T is characterized as a tuple specifying
the set of elements constituting the type T
and the collection of functions related
to the type T.
Since constants may be regarded as zero-ary functions
(having no arguments), we will simply speak of the signature
defining a particular type T.
Also, in accord with common parlance, we will speak
of the sorts of a signature, which are the sorts (or types)
occurring in the declarations of its functions.
See slide [8-signature].
slide: Algebraic specification
A signature specifies the names and (function) profiles
of the constants and functions of a data type.
In general, the profile of a function f is specified as

  f : s1 * ... * sn -> s

where s1, ..., sn are the sorts defining
the domain (that is, the types of the arguments)
of the function f, and s is the sort defining the codomain
(or result type) of f.
In the case that n = 0, the function f may be regarded as
a constant. More generally, when s1, ..., sn are all
unrelated to the type T being defined, we may regard
f as a relative constant.
Relative constants are values that are assumed to
be defined in the context
where the specification is being employed.
The functions related to a data type T may be discriminated
according to their role in defining T.
We distinguish between producers,
which have the type T under definition as their result
type, and observers, which take T
as an argument type and deliver a result of
a type different from T.
In other words, producer functions define how
elements of T may be constructed.
(In the literature one often speaks of constructors,
but we avoid this term because it already has a
precisely defined meaning in the object-oriented
programming language C++.)
In contrast, observer functions do not produce
values of T, but give instead information
on some particular aspect of T.
The signature of a type T is uniquely defined
by the union of its producer functions and
observer functions.
Constants of type T are regarded as a subset of
the producer functions defining T.
Further, we require that the collection of producers is
disjoint from the collection of observers for T;
in other words, no function acts as both a producer and an observer.
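As a hypothetical illustration of this division (the stack type used here anticipates the ADT Stack discussed later), the following Haskell sketch marks which functions are producers and which are observers.

  -- Hypothetical illustration: producers versus observers for a stack of Ints.
  module ProducersObservers where

  -- Producers have the type under definition (Stack) as their result type;
  -- the generators new and push are modelled as constructors.
  data Stack = New | Push Int Stack
    deriving (Show)

  -- pop is also a producer (it yields a Stack), but not a generator:
  -- every pop can be rewritten away against push.
  pop :: Stack -> Stack
  pop (Push _ s) = s
  pop New        = New          -- assumption: popping the empty stack is a no-op

  -- Observers take a Stack as argument and deliver a result of another sort.
  isEmpty :: Stack -> Bool
  isEmpty New        = True
  isEmpty (Push _ _) = False

  top :: Stack -> Maybe Int     -- Maybe encodes the missing result for New
  top New        = Nothing
  top (Push x _) = Just x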
Generators
The producer functions actually defining the
values of a data type T are called the
generator basis of T, or generators of T.
The generators of T may be used to enumerate the
elements of T, resulting in the collection of T values
that is called the generator universe in [Dahl92].
See slide [8-basis].
slide: Generators -- basis and universe
The generator universe of a type T consists of the
closed (that is variable-free) terms that may be constructed
using either constants or producer functions of T.
As an example, consider the data type Bool with
generators true and false.
Obviously, the generator universe of Bool, which
constitutes its value domain, consists only of the values
true and false.
As another example, consider the data type Nat
(representing the natural numbers) with generator basis
{ 0, S }, consisting of the constant 0
and the successor function S
(that delivers the successor of its argument).
The terms that may be constructed from this basis form the
set { 0, S 0, SS 0, ... },
which uniquely corresponds to the natural numbers
0, 1, 2, ...
(More precisely,
the natural numbers are isomorphic with this generator universe.)
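A small Haskell sketch of this generator basis; the names Zero and S, and the enumeration of the universe as a lazy list, are of course only one possible rendering.

  -- A sketch of the generator basis {0, S} for Nat and its generator universe.
  module NatGenerators where

  data Nat = Zero | S Nat
    deriving (Eq, Show)

  -- The generator universe: all closed terms built from Zero and S,
  -- i.e. Zero, S Zero, S (S Zero), ...
  universe :: [Nat]
  universe = iterate S Zero

  -- The isomorphism with the ordinary naturals 0, 1, 2, ...
  toInteger' :: Nat -> Integer
  toInteger' Zero  = 0
  toInteger' (S n) = 1 + toInteger' n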
In contrast, given a type A
with elements a, b, ..., the generators of the corresponding
set type result in
a universe that contains distinct terms denoting the same set
(for instance, a term inserting first a and then b,
and a term inserting first b and then a), which we would like
to identify, based on our conception of a set
as containing only one exemplar of a particular value.
To effect this we need additional equations imposing
constraints expressing what we consider as the
desired shape (or normal form) of the values
contained in the universe of T.
However, before we look at how to extend a signature
defining T with equations defining the
(behavioral) properties of T we will look
at another example illustrating how the choice
of a generator basis may affect the structure
of the value domain of a data type.
In the example presented in slide [8-Seq],
the profiles are given of the functions
that may occur in the signature specifying sequences.
(The notation _ is used to indicate parameter positions.)
slide: The ADT Seq
Depending on which producer functions are selected to
generate the universe of T, the correspondence
between the generated universe and the intended domain
is either one-to-one
(as for some choices of generator basis) or many-to-one
(as for others).
Since we require our specification to be first-order
and finite, infinite generator bases
must be disallowed, even if they result
in a one-to-one correspondence.
See [Dahl92] for further details.
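Since the profiles of slide [8-Seq] are not reproduced above, the following Haskell sketch assumes two commonly used generator bases for sequences: an empty sequence with a prefix (cons) operation, which is one-to-one, and an empty sequence with unit sequences and concatenation, which is many-to-one because associativity and unit laws identify distinct terms.

  -- Assumed illustration of two generator bases for sequences over a.
  module SeqGenerators where

  -- Basis 1: empty and cons; every finite sequence has exactly one term.
  data Seq1 a = Empty1 | Cons a (Seq1 a)
    deriving (Eq, Show)

  -- Basis 2: empty, unit and concatenation; many terms denote one sequence,
  -- e.g. Concat Empty2 (Unit 1) and Unit 1 both denote the sequence <1>.
  data Seq2 a = Empty2 | Unit a | Concat (Seq2 a) (Seq2 a)
    deriving (Eq, Show)

  -- Interpreting both term universes in the intended domain (Haskell lists)
  -- makes the one-to-one versus many-to-one correspondence visible.
  toList1 :: Seq1 a -> [a]
  toList1 Empty1     = []
  toList1 (Cons x s) = x : toList1 s

  toList2 :: Seq2 a -> [a]
  toList2 Empty2       = []
  toList2 (Unit x)     = [x]
  toList2 (Concat s t) = toList2 s ++ toList2 t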
Equations -- specifying constraints
The specification of the signature of a type
(which lists the syntactic constraints
to which a specification must comply)
is in general not sufficient to characterize
the properties of the values of the type.
In addition, we need to impose semantic constraints
(in the form of equations) to define the meaning
of the observer functions and (very importantly)
to identify the elements of the type domain
that are considered equivalent (based on
the intuitions one has of that particular type).
[reflexivity]   x = x
[symmetry]      x = y  implies  y = x
[transitivity]  x = y  and  y = z  implies  x = z
[congruence]    x = y  implies  f(x) = f(y)
slide: Equivalence
Mathematically, the equality predicate may
be characterized by the properties listed above,
including reflexivity (stating that an element
is equal to itself), symmetry (stating that
the orientation of the formula is not important)
and transitivity (stating that if one element
is equal to another and that element is equal to
yet another, then the first element is also equal to
the latter).
In addition, we have the property that, given
that two elements are equal, the results
of the function applied to them (separately) are
also equal.
(Technically, the latter property makes a congruence
of the equality relation, lifting
equality between elements to the function level.)
See slide [8-equivalence].
Given a suitable set of equations, in addition to
a signature,
we may identify the elements that can be proved
identical by applying the equality relation.
In other words, given an equational theory
(of which the properties stated above must be a part),
we can divide the generator universe of a type T
into one or more subsets,
each consisting of elements that are equal
according to our theory.
These subsets of the generator universe GU, that is GU
factored with respect to equivalence,
may be regarded as the abstract elements constituting the type
T,
and from each subset we may choose a concrete
element acting as a representative for the subset,
which is the equivalence class of that element.
Operationally, equations may be regarded as
rewrite rules (oriented from left to right)
that allow us to transform a term in which
a term t1 occurs as a subterm into a term
in which t1 is replaced by t2, whenever t1 = t2 is
(an instance of) one of the equations.
For this procedure to be terminating,
some technical restrictions must be met,
amounting (intuitively) to the requirement that the right-hand side
must in some sense be simpler than the left-hand side.
Also, when defining an observer function, we must specify for
each possible generator case an appropriate rewriting rule.
That is, each observer must be able to give
a result for each generator.
The example of the natural numbers, given below, will make
this clear.
Identifying spurious elements by rewriting a term into
a canonical form is somewhat more complex,
as we will see for the example of sets.
Equational theories
To illustrate the notions introduced above,
we will look at specifications of some familiar
types, namely the natural numbers and sets.
In slide [8-Nat], an algebraic specification is given of the natural
numbers (as first axiomatized by Peano).
Natural numbers
Nat
functions
0 : Nat
S : Nat -> Nat
mul : Nat * Nat -> Nat
plus : Nat * Nat -> Nat
axioms
[1] plus(x,0) = x
[2] plus(x,Sy) = S(plus(x,y))
[3] mul(x,0) = 0
[4] mul(x,Sy) = plus(mul(x,y),x)
end
slide: The ADT Nat
In addition to the constant 0 and successor function
S we also introduce a function mul for
multiplication and a function plus for addition.
(The notation Sy stands for application by
juxtaposition; its meaning is simply S(y).)
The reader who does not immediately accept
the specification in slide [8-Nat] as an adequate
axiomatization of the natural numbers must
try to unravel the computation depicted in slide [8-symbolic].
mul(plus(S 0,S 0),S 0) -[2]->
mul(S(plus(S 0,0)), S 0) -[1]->
mul(SS 0,S 0) -[4]->
plus(mul(SS 0,0),SS 0) -[3]->
plus(0,SS 0) -[2*]-> SS 0
slide: Symbolic evaluation
Admittedly, not an easy way to compute with
natural numbers, but fortunately term rewriting
may, to a large extent, be automated (and actual calculations
may be mimicked by semantics preserving primitives).
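As a sketch of such mechanization, the Haskell program below transcribes axioms [1]-[4] directly; evaluating the term of slide [8-symbolic] then performs essentially the same rewriting.

  -- A sketch of the ADT Nat; the defining equations mirror axioms [1]-[4].
  module PeanoNat where

  data Nat = Zero | S Nat
    deriving (Eq, Show)

  plus :: Nat -> Nat -> Nat
  plus x Zero  = x                    -- [1]
  plus x (S y) = S (plus x y)         -- [2]

  mul :: Nat -> Nat -> Nat
  mul _ Zero  = Zero                  -- [3]
  mul x (S y) = plus (mul x y) x      -- [4]

  -- The term of slide [8-symbolic]: mul(plus(S 0, S 0), S 0)
  example :: Nat
  example = mul (plus (S Zero) (S Zero)) (S Zero)
  -- evaluates to S (S Zero), that is SS 0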
Using the equational theory expressing the properties
of natural numbers, we may eliminate
the occurrences of the functions mul and plus
to arrive (through symbolic evaluation) at something
of the form S^n 0 (where n corresponds
to the magnitude of the natural number denoted by the term).
The opportunity of symbolic evaluation by term
rewriting is exactly what has made the algebraic
approach so popular for the specification of software,
since it allows (under some restrictions)
for executable specifications.
Since they do not reappear in what may be considered
the normal forms of terms denoting the naturals
(that are obtained by applying the evaluations induced by the
equality theory),
the functions plus and mul may be regarded as secondary
producers.
They are not part of the generator basis of the type Nat.
Since we may consider mul and plus as secondary
producers at best, we can easily see that by defining
mul and plus for the cases 0 and Sx
(for arbitrary x) we have covered all possible (generator) cases.
Technically, this allows us to prove properties of these
functions by using structural induction on the possible generator cases.
The proof obligation (in the case of the naturals) then is to
prove that the property holds for the function applied
to 0, and that, assuming the property
holds when the function is applied to x, it also
holds for Sx.
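For instance, to establish the (easily checked, but not explicitly stated) property plus(0,x) = x, structural induction over the generator cases proceeds roughly as follows.

  base:  plus(0,0)  = 0                by [1]
  step:  plus(0,Sx) = S(plus(0,x))     by [2]
                    = Sx               by the induction hypothesis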
As our next example, consider the algebraic
specification of the type Set in slide [8-Set].
Sets
slide: The ADT Set
In the case of sets we have the problem that
we do not start with a one-to-one generator basis
as we had with the natural numbers.
Instead, we have a many-to-one generator basis,
so we need equality axioms to eliminate spurious
elements from the (generator) universe of sets.
slide: Equivalence classes for Set
The equivalence classes of the generator universe of Set
(which is factored by the equivalence
relation) each have multiple elements (except
the class representing the empty set).
To select an appropriate representative
from each of these classes (representing the abstract
elements of the type Set) we need
an ordering on terms, so that we can take the smallest
term as the canonical representative.
See slide [8-set-equi].
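As an illustration (the generators empty and insert are assumed here, since the slide itself is not reproduced), the Haskell sketch below chooses a canonical representative by ordering the inserted elements and removing duplicates.

  -- Assumed sketch: set terms generated by empty and insert, with a
  -- canonical representative chosen by ordering and removing duplicates.
  module SetTerms where

  import Data.List (nub, sort)

  data SetTerm = Empty | Insert Int SetTerm
    deriving (Eq, Show)

  elements :: SetTerm -> [Int]
  elements Empty        = []
  elements (Insert x s) = x : elements s

  -- The canonical representative of an equivalence class: insertions in
  -- ascending order, each element occurring exactly once.
  normalize :: SetTerm -> SetTerm
  normalize = foldr Insert Empty . nub . sort . elements

  -- Two terms are equivalent when they denote the same set,
  -- e.g. Insert 2 (Insert 1 (Insert 2 Empty)) and Insert 1 (Insert 2 Empty).
  equivalent :: SetTerm -> SetTerm -> Bool
  equivalent s t = normalize s == normalize t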
Initial algebra semantics
In the previous section we have given a rather operational
characterization of the equivalence relation induced by
the equational theory and the process of term rewriting
that enables us to purge the generator universe of a type,
by eliminating redundant elements.
However, what we actually strive for is a mathematical
model that captures the meaning of an algebraic
specification.
Such a model is provided (or rather a class of such models)
by the mathematical structures known as
(not surprisingly) algebras.
A single-sorted algebra is a structure (A, F),
where A is a set of values and F specifies the signature
of the functions operating on A.
A multi-sorted algebra is a structure ({ A_s }, F),
where S is a set of sort names, A_s (for s in S) the set of values
belonging to the sort s, and F again the signature.
The set S may be ordered (in which case the ordering
indicates the subtyping relationships between the sorts).
We call such a (multi-sorted) structure an algebra for the signature.
slide: Interpretations and models
Having a notion of algebras, we need to have a way in which
to relate an algebraic specification to
such a structure.
To this end we define an interpretation eval,
which maps closed terms formed by following the rules
given in the specification to elements of the structure.
We may extend the interpretation eval to include
variables as well, but then we also need to assume
that an assignment of (closed) terms to variables is given,
such that when applying that assignment to a term t the result
is free of variables; otherwise no interpretation in the structure exists.
See slide [8-algebra].
Interpretations
As an example, consider the interpretations
of the specification of Bool and
the specification of Nat, given in slide [8-B-N].
slide: Interpretations of Bool and Nat
The structure given above is simply a boolean algebra, with
the usual operators for negation, conjunction and disjunction.
The functions not, and and or naturally map to their semantic counterparts.
In addition, we assume that the constants true and false map
to the elements tt and ff.
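A sketch of such an interpretation in Haskell: closed Bool terms are mapped onto Haskell's built-in booleans, which here stand in for the structure with elements tt and ff.

  -- A sketch of an interpretation eval from closed Bool terms to a boolean
  -- algebra, using Haskell's Bool as the structure (True ~ tt, False ~ ff).
  module BoolInterpretation where

  data Term = T | F | And Term Term | Or Term Term | Not Term

  eval :: Term -> Bool
  eval T         = True
  eval F         = False
  eval (And x y) = eval x && eval y
  eval (Or  x y) = eval x || eval y
  eval (Not x)   = not (eval x)

  -- Adequacy, informally: two closed terms are provably equal from [B1]-[B5]
  -- exactly when eval maps them to the same element of the structure.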
As another example, look at a structure for the natural numbers
and an interpretation which maps
the functions S, mul and plus specified
in Nat in a natural way.
However, since we have also given equations for Nat
(specifying how to eliminate the functions mul and plus)
we must take precautions such that the following requirement
is satisfied if the structure is to count as an adequate
model of Nat:
whenever equality holds for two interpreted
terms (in the structure) then these terms must also be provably equal
(by using the equations given in the specification of Nat),
and vice versa.
As we will see illustrated later, many models may exist
for a single specification, all satisfying the requirement
of adequacy.
The question is: do we have a means to select one of these
models as (in a certain sense) the best model?
The answer is yes:
these are the models called initial models.
Initial models
A model (in a mathematical sense) represents the meaning
of a specification in a precise way.
A model may be regarded as stating a commitment with
respect to the interpretation of the specification.
An initial model is intuitively the least committing
model, least committing in the sense
that it imposes only identifications made necessary by
the equational theory of a specification.
Technically, an initial model is a model from which
every other model can be derived by an algebraic mapping
which is a homomorphism.
slide: Initial models
The starting point for the construction of an initial model
for a given specification is to
construct a term algebra with
the terms that may be generated from the signature
as elements.
The next step is then to factor the universe of generated
terms into equivalence classes, such that
two terms belong to the same class if they can be proven equivalent
with respect to the equational theory of the specification.
We will denote the representative of the equivalence class to
which a term t belongs by [t].
Hence (in the model) [t1] = [t2] iff t1 and t2 can be proven equivalent.
So assume that we have constructed such a structure;
finally, we must define an interpretation,
say eval, that assigns closed
terms to appropriate elements of the term model
(namely the representatives of the equivalence classes of those terms).
Hence, the interpretation of a function f in the structure
is such that f applied to the classes [t1], ..., [tn]
yields the class [f(t1, ..., tn)].
In other words, the result of applying f to terms t1, ..., tn
belongs to the same equivalence class as the result
of applying the interpretation of f to the representatives of the
equivalence classes of t1, ..., tn.
See slide [8-initial].
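For Nat, the construction can be sketched concretely: the representative [t] of a term t is taken to be its normal form, and two terms are identified exactly when their normal forms coincide. The Haskell fragment below is such a sketch (the names normalize and sameClass are ours).

  -- A sketch of the term model for Nat: the representative of a term's
  -- equivalence class is its normal form (all occurrences of plus and mul
  -- rewritten away using axioms [1]-[4]).
  module NatTermModel where

  data Term = Zero | S Term | Plus Term Term | Mul Term Term
    deriving (Eq, Show)

  -- plusNF and mulNF are only ever applied to terms already in normal form
  -- (built from Zero and S alone); they mirror axioms [1]-[4].
  plusNF, mulNF :: Term -> Term -> Term
  plusNF x Zero  = x
  plusNF x (S y) = S (plusNF x y)
  plusNF x y     = Plus x y          -- unreachable for normal-form arguments
  mulNF _ Zero  = Zero
  mulNF x (S y) = plusNF (mulNF x y) x
  mulNF x y     = Mul x y            -- unreachable for normal-form arguments

  normalize :: Term -> Term
  normalize Zero       = Zero
  normalize (S t)      = S (normalize t)
  normalize (Plus x y) = plusNF (normalize x) (normalize y)
  normalize (Mul x y)  = mulNF (normalize x) (normalize y)

  -- Two terms belong to the same equivalence class, [t1] = [t2], exactly
  -- when their normal forms coincide.
  sameClass :: Term -> Term -> Bool
  sameClass t1 t2 = normalize t1 == normalize t2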
An initial algebra model has two important properties,
known respectively as the no junk and no confusion
properties.
The no junk property states that for each element of the
model there is some term whose interpretation in the model
is equal to that element.
(For the term model this is simply a representative of
the equivalence class corresponding with the element.)
The no confusion property states that if equality of two
terms can be proven in the equational theory of the specification,
then the equality also holds (semantically) in the model,
and vice versa.
The no confusion property means, in other words, that
sufficiently many identifications are made
(namely those that may be proven to hold),
but no more than that (that is, no other than those
for which a proof exists).
The latter property is why we may speak of an initial model
as the least committing model; it simply gives
no more meaning than is strictly needed.
The initial model constructed from the term algebra
of a signature is intuitively a very natural model
since it corresponds directly with (a subset of) the
generator universe of that signature.
Given such a model, other models may be derived from it
simply by specifying an appropriate interpretation.
For example, when we construct a model for the natural
numbers (as specified by Nat) consisting of the
generator universe { 0, S 0, SS 0, ... }
and the operators for successor, addition and multiplication
defined on these terms,
we may simply derive from this model
the structure of the ordinary natural numbers 0, 1, 2, ...
for which the operations have their standard arithmetical
meaning.
Actually, this structure is also an initial model for
Nat, since we may also make the inverse transformation.
More generally, when defining an initial model only
the structural aspects
(characterizing the behavior of the operators) are
important, not the actual contents.
Technically, this means that initial models
are defined up to isomorphism, that is a mapping to
equivalent models with perhaps different contents
but an identical structure.
Not in all cases is a structure derived from an initial model itself
also an initial model, as shown in the example below.
Example
Consider the specification of Bool as given before.
For this specification, the structure and interpretation
given above define
an initial model for Bool. (Check this!)
slide: Structure and interpretation
We may, however, also use this boolean structure to
define an interpretation of Nat.
See slide [8-structure].
The interpretation maps
the constant 0 to tt and
interprets the successor function S as negation,
so that a term Sx is interpreted as the negation of the
interpretation of x;
the functions plus and mul are mapped to corresponding
boolean operations.
The reader may wish to ponder what this interpretation
amounts to.
The answer is that it interprets Nat as
specifying the naturals modulo 2, which discriminates
only between odd and even numbers.
Clearly, this interpretation does not define an initial model,
since it identifies all odd numbers with ff and all even numbers with tt.
Even if we replace ff by 0 and tt by 1, this is not what we
generally would like to commit ourselves to when we speak
about the natural numbers,
simply because it assigns too much meaning.
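The following Haskell sketch spells out this modulo-2 interpretation; the boolean operations chosen for plus and mul are those forced by arithmetic modulo 2, and are an assumption on our part rather than a transcription of the slide.

  -- A sketch of the "naturals modulo 2" interpretation of Nat into Bool:
  -- 0 is mapped to tt (True) and S to negation, so even numbers become
  -- True and odd numbers become False.
  module NatModTwo where

  data Term = Zero | S Term | Plus Term Term | Mul Term Term

  evalMod2 :: Term -> Bool
  evalMod2 Zero       = True                        -- tt
  evalMod2 (S t)      = not (evalMod2 t)            -- flip parity
  evalMod2 (Plus x y) = evalMod2 x == evalMod2 y    -- parity of a sum
  evalMod2 (Mul x y)  = evalMod2 x || evalMod2 y    -- parity of a product

  -- e.g. evalMod2 (S (S (S Zero))) == False: 3 is odd, identified with ff.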
Objects as algebras
The types for which we have thus far seen algebraic specifications
(including Bool, Seq, Set and Nat)
are all types of a mathematical kind,
which (by virtue of being mathematical) define
operations without side-effects.
Dynamic state changes, that is side-effects,
are often mentioned as determining the characteristics
of objects in general.
In the following we will explore how we may deal with
assigning meaning to dynamic state changes in an algebraic framework.
Let us look first at the abstract data type stack.
The type stack may be considered as one of the `real life'
types in the world of programming.
See slide [8-Stack].
Abstract Data Type -- applicative
Stack
functions
new : stack;
push : element * stack -> stack;
empty : stack -> boolean;
pop : stack -> stack;
top : stack -> element;
axioms
empty( new ) = true
empty( push(x,s) ) = false
top( push(x,s) ) = x
pop( push(x,s) ) = s
preconditions
pre: pop( s : stack ) = not empty(s)
pre: top( s : stack ) = not empty(s)
end
slide: The ADT Stack
Above, a stack has been specified by giving a signature
(consisting of the functions new, push, empty,
pop and top).
In addition to the axioms characterizing the behavior of the stack,
we have included two pre-conditions
to test whether the stack is empty in case pop
or top is applied.
The pre-conditions result in conditional
axioms for the operations pop and top.
Conditional axioms, however, do preserve
the initial algebra semantics.
The specification given above is a maximally
abstract description of the behavior of a stack.
Adding more implementation detail would disrupt
its nice applicative structure, without necessarily
resulting in different behavior (from a sufficiently
abstract perspective).
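One possible concrete reading of the applicative specification is sketched below in Haskell, with the preconditions reflected by partial results of type Maybe.

  -- A sketch of the applicative ADT Stack; pop and top return Maybe to
  -- reflect the preconditions (they are undefined on the empty stack).
  module ApplicativeStack where

  data Stack a = New | Push a (Stack a)
    deriving (Eq, Show)

  empty :: Stack a -> Bool
  empty New        = True          -- empty(new) = true
  empty (Push _ _) = False         -- empty(push(x,s)) = false

  top :: Stack a -> Maybe a
  top New        = Nothing         -- precondition violated: not empty(s)
  top (Push x _) = Just x          -- top(push(x,s)) = x

  pop :: Stack a -> Maybe (Stack a)
  pop New        = Nothing         -- precondition violated: not empty(s)
  pop (Push _ s) = Just s          -- pop(push(x,s)) = s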
The behavior of elements of abstract data types
and objects is characterized by state changes.
State changes may affect the value delivered
by observers or methods.
Many state changes (such as the growing or shrinking of a
set, sequence or stack) really are nothing but applicative
transformations that may mathematically be described by
the input-output behavior of an appropriate function.
An example in which the value of an object
on some attribute is dependent on the history
of the operations applied to the object, instead of
the structure of the object itself (as in the case
of a stack) is the object account, as specified in
slide [8-account].
The example is taken from [Goguen].
Dynamic state changes -- objects
account
object account is
functions
bal : account -> money
methods
credit : account * money -> account
debit : account * money -> account
error
overdraw : money -> money
axioms
bal(new(A)) = 0
bal(credit(A,M)) = bal(A) + M
bal(debit(A,M)) = bal(A) - M if bal(A) >= M
error-axioms
bal(debit(A,M)) = overdraw(M) if bal(A) < M
end
slide: The algebraic specification of an account
An account object has one attribute function
(called bal) that delivers the amount of money
that is (still) in the account.
In addition, there are two method functions,
credit and debit that may respectively be used
to add or withdraw money from the account.
Finally, there is one special error function, overdraw,
that is used to define the result of balance
when there is not enough money left to grant
a debit request.
Error axioms are needed whenever
the proper axioms are stated conditionally,
that is contain an if expression.
The conditional parts of the axioms, including the
error axioms, must cover all possible cases.
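For comparison, the axioms themselves can be read applicatively, as in the Haskell sketch below (the rendering and names are ours): each method returns a new account value, and the error axiom appears as an Either result. The discussion that follows contrasts this applicative reading with the view of an account as an object whose attribute bal is redefined in place.

  -- A sketch of the account object read applicatively: each method returns
  -- a new account value; an overdrawn debit yields an error result instead.
  module Account where

  type Money = Integer

  newtype Account = Account { bal :: Money }
    deriving (Show)

  new :: Account
  new = Account 0                              -- bal(new(A)) = 0

  credit :: Account -> Money -> Account
  credit (Account b) m = Account (b + m)       -- bal(credit(A,M)) = bal(A) + M

  -- bal(debit(A,M)) = bal(A) - M  if bal(A) >= M, otherwise overdraw(M)
  debit :: Account -> Money -> Either Money Account
  debit (Account b) m
    | b >= m    = Right (Account (b - m))
    | otherwise = Left m                       -- overdraw(M)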
Now, first look at the form of the axioms.
The axioms are specified as equations of the form
fn(method(...)) = expression,
where fn specifies an attribute function
(bal in the case of account) and method a method
(either new, which is used to create new accounts,
credit or debit).
By convention, we assume that
a method function returns its first argument.
Applying a method thus results in redefining the value
of the function fn.
For example, invoking the method credit(acc,M)
for the account acc
results in modifying the function bal
to deliver the value bal(acc) + M instead of simply bal(acc).
In the example above, the axioms define the meaning of the function bal
with respect to the possible method applications.
It is not difficult to see
that these operations are of a non-applicative nature,
non-applicative in the sense that each time a method is
invoked the actual definition of bal is changed.
The change is necessary because, in contrast
to, for example, the functions employed in a
boolean algebra, the actual value of the account
may change in time in a completely arbitrary way.
A first order framework of (multi sorted) algebras
is not sufficiently strong to define the meaning of such changes.
What we need may be characterized as a multiple world semantics,
where each world corresponds to a possible state of the account.
As an alternative semantics we will also discuss
the interpretation of an object as an abstract machine,
which resembles an (initial) algebra with hidden sorts.
Multiple world semantics
From a semantic perspective, an object that changes its state
may be regarded as moving from one world to another,
when we see a world as representing a particular
state of affairs.
Take for example an arbitrary (say John's) account,
which has a balance of 500.
We may express this as bal(account) = 500.
Now, when we invoke the method credit, as
in credit(account, 200),
then we expect the balance of the account to be raised
to 700.
In the language of the specification, this is expressed as
bal(credit(account, 200)) = bal(account) + 200.
Semantically, the result is a state of affairs in which
bal(account) = 700.
In [Goguen] an operational interpretation is given
of a multiple world semantics by introducing a
database D (that stores the values of the
attribute functions of objects as first order terms)
which is transformed, as the result of invoking
a method, into a new database (that has an updated
value for the attribute function modified by the method).
The meaning of each database (or world) may be characterized
by an algebra and an interpretation as before.
The rules according to which transformations on a database
take place may be formulated as in slide [8-multiple].
slide: The interpretation of change
The first rule (attribute) describes how attribute functions
are evaluated.
Whenever a function f with arguments a1, ..., an
evaluates to a value (or expression) v, then the term
f(a1, ..., an) may be replaced by v without affecting the database D.
(We have simplified the treatment by omitting all aspects having
to do with matching and substitutions,
since such details are not needed to understand the process
of symbolic evaluation in a multiple world context.)
The next rule (method) describes the result of
evaluating a method. We assume that invoking the method
changes the database D into a new database D'.
Recall that, by convention, a method returns its first
argument.
Finally, the last rule (composition) describes how we may glue all
this together.
No doubt, the reader needs an example to get a picture
of how this machinery actually works.
Example - a counter object
ctr
object ctr is
function n : ctr -> nat
method incr : ctr -> ctr
axioms
n(new(C)) = 0
n(incr(C)) = n(C) + 1
end
slide: The object ctr
In slide [8-ctr], we have specified a simple object ctr with
an attribute function n
(delivering the value of the counter)
and a method function incr
(that may be used to increment the value of the counter).
Abstract evaluation
<n(incr(incr(new(C)))),{ C }> -[new]->
<n(incr(incr(C))),{ C[n:=0] }> -[incr]->
<n(incr(C)),{ C[n:=1] }> -[incr]->
<n(C), { C[n:=2] }> -[n]->
<2, { C[n:=2] }>
slide: An example of abstract evaluation
The end result of the evaluation depicted in slide [8-ctr-evl] is
the value 2 and a context (or database) in which the
value of the counter C is (also) 2.
The database is modified in each step
in which the method incr is applied.
When the attribute function n is evaluated
the database remains unchanged, since it is merely
consulted.
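The following Haskell sketch is a hypothetical rendering of this machinery for the ctr object alone: a small database records the current value of the attribute n and is threaded through the evaluation, mirroring slide [8-ctr-evl].

  -- A hypothetical sketch of multiple world semantics for the ctr object:
  -- the "database" records the current value of the attribute n, and each
  -- method invocation transforms it.
  module CtrWorlds where

  import qualified Data.Map as Map

  type ObjId    = String
  type Database = Map.Map ObjId Integer   -- value of attribute n per object

  data Expr = New ObjId | Incr Expr | N Expr

  -- Evaluating a method expression returns the object it denotes together
  -- with the transformed database (rules [method] and [composition]).
  evalObj :: Database -> Expr -> (ObjId, Database)
  evalObj db (New c)  = (c, Map.insert c 0 db)          -- n(new(C)) = 0
  evalObj db (Incr e) = let (c, db') = evalObj db e
                        in  (c, Map.adjust (+ 1) c db') -- n(incr(C)) = n(C) + 1
  evalObj _  (N _)    = error "N is an attribute, not a method"

  -- Evaluating the attribute n consults, but does not change, the database
  -- (rule [attribute]).
  evalN :: Database -> Expr -> (Integer, Database)
  evalN db (N e) = let (c, db') = evalObj db e
                   in  (Map.findWithDefault 0 c db', db')
  evalN _  _     = error "expected an attribute expression"

  -- evalN Map.empty (N (Incr (Incr (New "C")))) == (2, fromList [("C",2)])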
Objects as abstract machines
Multiple world semantics provide a very
powerful framework in which to define the meaning of object
specifications.
Yet, as illustrated above, the reasoning involved
has a very operational flavor and lacks the appealing
simplicity of the initial algebra semantics
given for abstract data types.
As an alternative, [Goguen] proposes an interpretation
of objects (with dynamic state changes) as
abstract machines.
Recall that an initial algebra semantics defines
a model in which the elements are equivalence classes
representing the abstract values of the data type.
In effect, initial models are defined only up to
isomorphism (that is, structural equivalence with
similar models).
In essence, the framework of initial algebra semantics
allows us to abstract from the particular representation
of a data type, when assigning meaning to a specification.
From this perspective it does not matter, for example, whether
integers are represented in binary or decimal notation.
The notion of abstract machines generalizes
the notion of initial algebras in that it loosens the
requirement of (structural) isomorphism,
to allow for what we may call behavioral equivalence.
The idea underlying the notion of behavioral equivalence
is to make a distinction between visible sorts
and hidden sorts and to look only at the
visible sorts to determine whether two algebras A and B
are behaviorally equivalent.
According to [Goguen], two algebras A and B
are behaviorally equivalent if and only if the result
of evaluating any expression of a visible sort in A
is the same as the result of evaluating that expression
in B.
Now, an abstract machine
(in the sense of Goguen and Meseguer, 1986)
is simply the equivalence class of behaviorally equivalent
algebras, or in other words the maximally abstract
characterization of the visible behavior of
an abstract data type with (hidden) states.
The notion of abstract machines is of particular relevance
as a formal framework to characterize the (implementation)
refinement relation between objects.
For example, it is easy to determine that the behavior
of a stack implemented as a list is equivalent
to the behavior of a stack implemented by a pointer array,
whereas these objects are clearly not equivalent
from a structural point of view.
Moreover, the behavior of both conforms (in an abstract sense)
to the behavior specified in an algebraic way.
Together, the notions of abstract machine and behavioral equivalence
provide a formalization of the notion of information hiding
in an algebraic setting.
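A Haskell sketch of the idea: two structurally different stack representations (a plain list, and a length-plus-contents pair standing in for a pointer and array) whose visible observers always agree; the helper agree is ours.

  -- A sketch of behavioral equivalence: two structurally different stack
  -- representations whose visible observer (top) always agrees.
  module BehavioralEquivalence where

  -- Hidden sort 1: a list.
  type StackL a = [a]

  pushL :: a -> StackL a -> StackL a
  pushL = (:)

  topL :: StackL a -> Maybe a
  topL []      = Nothing
  topL (x : _) = Just x

  -- Hidden sort 2: an (index, contents) pair, mimicking a pointer plus array.
  type StackP a = (Int, [a])

  pushP :: a -> StackP a -> StackP a
  pushP x (n, xs) = (n + 1, x : xs)

  topP :: StackP a -> Maybe a
  topP (0, _)     = Nothing
  topP (_, x : _) = Just x
  topP (_, [])    = Nothing

  -- For every sequence of pushes the visible results coincide, even though
  -- the hidden representations are not structurally isomorphic.
  agree :: [Int] -> Bool
  agree xs = topL (foldr pushL [] xs) == topP (foldr pushP (0, []) xs)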
In the chapters that follow we will look at alternative
formalisms to explain information hiding, polymorphism
and behavioral refinement.