\label{des/per/concept}
One of the reasons for employing parallelism in a program is
the need for efficiency.
Preferably, the parallelism remains implicit,
meaning that the compiler takes care of how
the concurrent execution of the program takes place.
However, realizing implicit parallelism is quite difficult in practice,
and in most cases the programmer will have to indicate explicitly how
to execute the program concurrently.
Another, equally valid motivation for employing parallelism
is that it better fits the conceptual structure of the problem
to be solved.
As an example, to synchronize the behavior of objects it may be worthwhile
to allow objects to have an activity of their own; cf. [Pe89] and [Bro86].
\parindex{geographical distribution}
As an additional aspect, the processes involved in a concurrent computation may be
geographically distributed,
either because they need resources residing on distant processors or
because distributing them increases the execution speed. See [CDJ84].
Programming languages that allow the computation to be spread over a number of distinct
processors connected by a network are called {\em distributed
programming languages}.
Computation models
\parindex{computation models}
\disindex{computation models}
When classifying distributed programming languages we can distinguish between
three distinct underlying computation models.
The most basic of these is that of communicating sequential processes,
first presented in the influential paper [Ho78].
Object-based concurrent languages may be regarded as
extending this basic model, by
generalizing communication and
providing additional features for synchronization
and protection.
Finally, the model underlying concurrent logic programming languages
is perhaps the most flexible of the three,
since it allows one to mimic both
previous models.
The languages that we will refer to in this section
are listed in the table below.
\begin{tabular}{ll}
CSP & communicating sequential processes [Ho78] \\
Occam & channels, statement-level parallelism \\
Ada & tasks \\
POOL & active objects \\
Emerald & multi-threaded objects \\
Linda & logical data sharing \\
Delta Prolog & channels \\
Concurrent Prolog & concurrent logic programming \\
Parlog & concurrent logic programming
\end{tabular}
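To make the most basic of these models concrete, the sketch below
expresses two communicating sequential processes. Throughout this section
we use Go for such sketches, merely as an executable notation; its
unbuffered channels provide the synchronous rendezvous of [Ho78].
\begin{verbatim}
package main

import "fmt"

// producer sends a fixed sequence of values, then closes the channel.
func producer(out chan<- int) {
        for i := 1; i <= 3; i++ {
                out <- i // synchronous rendezvous, as in CSP
        }
        close(out)
}

// consumer receives until the channel is closed.
func consumer(in <-chan int, done chan<- struct{}) {
        for v := range in {
                fmt.Println("received", v)
        }
        done <- struct{}{}
}

func main() {
        ch := make(chan int) // unbuffered: sender and receiver synchronize
        done := make(chan struct{})
        go producer(ch)
        go consumer(ch, done)
        <-done
}
\end{verbatim}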
Distributed programming languages may differ in what
is employed as the unit of parallelism,
the way they deal with communication and how partial failures
are handled.
In [Ba89] an overview is given of a number of distributed programming languages,
discussing the choices made with regard to these dimensions.
Parallelism
\disindex{unit of parallelism}
There is abundant choice in what to take as the unit
of parallelism; to name a few: processes (CSP), tasks (Ada),
active objects (POOL), multi-threaded objects (Emerald),
clauses (Concurrent Prolog and Parlog), or even statements (Occam).
With respect to the granularity of computation,
Concurrent Prolog and Parlog lie at the end of the spectrum of
languages supporting small-grain parallelism,
whereas Ada and POOL lie at the other end,
supporting large-grain parallelism.
Large-grain parallelism means that, proportionally, the amount of
computation significantly exceeds the time spent communicating
with other processes.
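As an illustration of large-grain parallelism, the sketch below divides a
summation over a small, fixed number of processes, each of which computes
much and communicates only once; a small-grain variant would instead spawn
one process per element, paying a communication and scheduling cost per
addition. The decomposition is ours and is chosen purely for illustration.
\begin{verbatim}
package main

import (
        "fmt"
        "sync"
)

// sumChunk adds one contiguous slice of the data: a large-grain task,
// since each process does far more computation than communication.
func sumChunk(chunk []int, result chan<- int, wg *sync.WaitGroup) {
        defer wg.Done()
        s := 0
        for _, v := range chunk {
                s += v
        }
        result <- s // a single communication per chunk
}

func main() {
        data := make([]int, 1000)
        for i := range data {
                data[i] = i
        }
        const workers = 4 // assumes len(data) is a multiple of workers
        results := make(chan int, workers)
        var wg sync.WaitGroup
        size := len(data) / workers
        for w := 0; w < workers; w++ {
                wg.Add(1)
                go sumChunk(data[w*size:(w+1)*size], results, &wg)
        }
        wg.Wait()
        close(results)
        total := 0
        for s := range results {
                total += s
        }
        fmt.Println("total:", total)
}
\end{verbatim}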
Another important issue is whether a language supports the allocation of processes
to processors.
Allocation is supported for instance by POOL and Occam.
Communication
\disindex{communication}
Another decision that must be made concerns the way
communication is dealt with.
As alternatives we encounter data sharing and message passing.
We mention Linda as an interesting example
of data sharing.\ftn{
The apparent contradiction between distribution and data sharing is resolved by making a distinction
between physical data sharing and logical
data sharing. Obviously, we mean the latter here.
Logical data sharing provides the programmer with the illusion
of common data by hiding the physical distribution of the data.
}
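To suggest the flavor of Linda's logical data sharing, the following toy,
single-processor sketch implements a tuple space with the out and in
operations; the type and helper names are ours, and a genuine Linda
implementation would hide the physical distribution of the tuples behind
this same interface.
\begin{verbatim}
package main

import (
        "fmt"
        "sync"
)

// TupleSpace is a minimal, in-process stand-in for Linda's shared
// tuple space.
type TupleSpace struct {
        mu     sync.Mutex
        cond   *sync.Cond
        tuples [][]interface{}
}

func NewTupleSpace() *TupleSpace {
        ts := &TupleSpace{}
        ts.cond = sync.NewCond(&ts.mu)
        return ts
}

// Out deposits a tuple, as in Linda's out().
func (ts *TupleSpace) Out(t ...interface{}) {
        ts.mu.Lock()
        ts.tuples = append(ts.tuples, t)
        ts.mu.Unlock()
        ts.cond.Broadcast()
}

// In removes and returns a tuple whose first field matches key,
// blocking until one is available, as in Linda's in().
func (ts *TupleSpace) In(key interface{}) []interface{} {
        ts.mu.Lock()
        defer ts.mu.Unlock()
        for {
                for i, t := range ts.tuples {
                        if len(t) > 0 && t[0] == key {
                                ts.tuples = append(ts.tuples[:i], ts.tuples[i+1:]...)
                                return t
                        }
                }
                ts.cond.Wait()
        }
}

func main() {
        ts := NewTupleSpace()
        go ts.Out("job", 42) // one process deposits a tuple
        t := ts.In("job")    // another retrieves it by matching
        fmt.Println(t)
}
\end{verbatim}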
Concurrent Prolog and Parlog,
which use shared logical variables as the medium of communication,
also deserve to be classified among the distributed languages.
If we choose message passing, we may employ point-to-point
connections (CSP), channels (Occam, Delta Prolog) or broadcasting.
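Broadcasting may, for instance, be composed from point-to-point
communications. The sketch below delivers one message to each of a set of
receiver channels; the helper broadcast is ours and is not a primitive of
any of the languages mentioned.
\begin{verbatim}
package main

import (
        "fmt"
        "sync"
)

// broadcast delivers one message to every subscriber channel,
// composing a broadcast from point-to-point sends.
func broadcast(msg string, subscribers []chan string) {
        for _, ch := range subscribers {
                ch <- msg
        }
}

func main() {
        subs := make([]chan string, 3)
        var wg sync.WaitGroup
        for i := range subs {
                subs[i] = make(chan string, 1)
                wg.Add(1)
                go func(id int, in <-chan string) {
                        defer wg.Done()
                        fmt.Printf("receiver %d got %q\n", id, <-in)
                }(i, subs[i])
        }
        broadcast("hello", subs)
        wg.Wait()
}
\end{verbatim}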
Communication may to a certain extent be non-deterministic.
For example, both the select statement of Ada
and the guarded Horn clauses of Concurrent Prolog and Parlog
result in the choice of one particular communication,
ignoring the alternatives.
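Go's select statement exhibits the same kind of non-determinism and may
serve as an illustration: when several communications are ready, one is
chosen and the alternatives are ignored.
\begin{verbatim}
package main

import "fmt"

func main() {
        a := make(chan string, 1)
        b := make(chan string, 1)
        a <- "from a"
        b <- "from b"
        // With both communications ready, select commits to one of
        // them non-deterministically, much like Ada's select statement
        // or the guarded Horn clauses of Concurrent Prolog and Parlog.
        select {
        case m := <-a:
                fmt.Println(m)
        case m := <-b:
                fmt.Println(m)
        }
}
\end{verbatim}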
Failures
\disindex{failures}
\disindex{partial failure}
As an additional feature, some of the languages mentioned
in [Ba89] handle partial failure by offering
exceptions, atomic sections or recovery mechanisms.
Failure may be due to, for example,
hardware errors or the violation of integrity constraints.
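As a minimal illustration of coping with partial failure, the sketch below
guards a communication with a timeout, one of the simpler recovery
mechanisms; the function query is hypothetical and merely stands in for a
request to a remote processor that has failed.
\begin{verbatim}
package main

import (
        "fmt"
        "time"
)

// query stands in for a request to a remote processor; here it never
// answers, modeling a crashed or unreachable site (a partial failure).
func query(reply chan<- string) {
        // the remote site has failed, so no reply is ever sent
}

func main() {
        reply := make(chan string)
        go query(reply)
        select {
        case r := <-reply:
                fmt.Println("answer:", r)
        case <-time.After(100 * time.Millisecond):
                // The timeout lets the caller regain control
                // instead of blocking forever.
                fmt.Println("no answer: suspecting partial failure")
        }
}
\end{verbatim}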
We wish to remark that such failures are rather different
from the failure encountered in a language such as Prolog.
Failure in Prolog is one of the possible outcomes of a computation;
it may even be used to generate all the solutions to
a particular goal.