Structure versus presentation -- hypermedia reference models
Hypermedia technology supports the organization of a variety of material into an associative structure.
In addition, hierarchical structuring facilities may be supported.
The basic notions underlying the structuring
facilities of hypertext have been expressed
in the Dexter hypertext model.
See [Halasz94].
Hypertext model -- documents
Dexter
- components, links and anchors
Component
- content
-- text, graphics, video, program
- attributes
-- semantic description
- anchors
-- links to other documents
- presentation
-- display characteristics
Compound
- children
-- subcomponents
slide: The Dexter hypertext reference model
The Dexter model explains the structure
of hypertext documents in terms
of components, links and
anchors.
The notion of anchors is introduced
to explain how to attach a link
to either a source or destination
component.
An anchor is the indication
of a particular spot in the document,
or rather a component thereof, usually
identified by some coordinate.
The Dexter model distinguishes
between three layers in a hypertext system,
namely the document layer
(defining the content and structure
of documents),
a storage layer
(handling the storage and retrieval of
components and links)
and a presentation layer
(handling the display of documents
and the interaction with the user).
A component,
which is a part of a document, is
characterized by the following features:
content
(which may be text, graphics, audio,
video or even a program),
attributes
(which give a semantic description
that may be used for retrieval or
selective display),
anchors
(which identify the places to which
a link is attached), and
presentation characteristics
(which determine the display of the
component).
In addition, for compound components,
a feature children may be defined, for
storing the list of subcomponents.
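To make this concrete, the features listed above
may be rendered as a small class sketch in C++.
This is only an illustration; the class and member
names below are assumptions made for the example,
not part of the Dexter model itself.

#include <map>
#include <string>
#include <vector>

struct anchor {                     // a place to which a link may be attached
  std::string id;                   // symbolic name
  std::string value;                // e.g. a coordinate within the content
};

class component {
public:
  std::string content;              // text, graphics, audio, video or a program
  std::map<std::string, std::string> attributes;  // semantic description
  std::vector<anchor> anchors;      // link attachment points
  std::string presentation;         // display characteristics, kept abstract here
  virtual ~component() { }
};

class compound : public component { // a compound component in addition
public:                             // stores the list of its subcomponents
  std::vector<component*> children;
};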
Multimedia
The original Dexter hypertext model is
strongly oriented towards text,
despite the provision for multimedia
content.
Multimedia content, in particular audio and video,
is intrinsically time-based
and requires temporal primitives to synchronize
the presentation of material from different
sources.
In the CMIF multimedia model
described in [Hardman94], channels
(which are abstract output devices) are
introduced to allow for the specification
of abstract timing constraints
by means of synchronization arcs between channels.
Multimedia model -- composition
CMIF
- data block -- atomic component
- channel -- abstract output device
- synchronization arc -- specifying timing constraints
- event -- actual presentation
Views
-- authoring
- structure -- sequential and parallel composition
- channels -- presentation
slide: The CMIF multimedia model
The notion of channels provides the abstraction
needed to separate the contents of a presentation
from the actual display characteristics.
For example, text may be output through
a text channel while, simultaneously,
video may be output through a video channel.
The screen layout and allocation of these
channels may be determined independently.
The actual presentation is determined by
events,
which may arise either as the result of a user
action or as the result of the activation of
a synchronization arc.
For example, a synchronization constraint
may specify that an audio fragment containing
speech must be started 10 seconds after the beginning
of a video sequence.
Then, after 10 seconds, the video channel will issue
an event that causes the audio channel to start
presenting its contents.
The CMIF model has been developed to allow for
portable multimedia documents.
In particular, the notion of channels allows
for a platform-independent characterization
of presentation characteristics and timing constraints.
An important characteristic of the model,
from an authoring perspective, is that it supports
a compositional approach to authoring,
since it allows us to compose a channel
(specifying a sequential composition of components)
with arbitrarily many other channels, in parallel.
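A rough rendering of these notions in C++ may look
as follows. The sketch reflects only the outline given
above; all identifiers (data_block, channel, sync_arc
and so on) are assumptions made for the example.

#include <string>
#include <vector>

struct data_block {                 // atomic component
  std::string contents;
  double duration;                  // in seconds
};

struct channel {                    // abstract output device
  std::string name;                 // e.g. "video" or "audio"
  std::vector<data_block> sequence; // sequential composition within the channel
};

struct sync_arc {                   // timing constraint between channels
  std::string from_channel, to_channel;
  double delay;                     // in seconds
};

struct presentation {               // parallel composition of channels
  std::vector<channel> channels;
  std::vector<sync_arc> arcs;
};

int main() {
  channel video = { "video", { { "scene.mpg", 60 } } };
  channel audio = { "audio", { { "speech.aiff", 30 } } };
  // the arc expresses the example constraint given above:
  // the speech starts 10 seconds after the video begins
  presentation doc = { { video, audio }, { { "video", "audio", 10 } } };
  return 0;
}

Note that each channel is itself a sequence, whereas the
presentation composes channels in parallel; the synchronization
arcs constrain their relative timing.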
In [Hardman94], an extension of the CMIF multimedia
model is developed to incorporate the associative
structuring facilities defined by the Dexter
hypertext model.
Hypermedia model -- components
- contents -- data block
- attributes -- semantic information
- anchors -- (id, value)
Compound
- children -- (component, start-time)
- synchronization -- (source, destination)
slide: A hypermedia reference model
In the combined model, a single component
consists of contents
containing the actual data blocks of the component,
attributes
that specify semantic information,
a list of anchors
(each specifying a symbolic name and a value,
which in the case of an audio or video
fragment is its time measured from the start),
and presentation characteristics,
which include
the specification of a channel and the
duration of the component.
As in the Dexter model, compound components
may have children attributes,
specifying for each child a component and its
start-time,
and a number of synchronization arcs,
each
specifying
a source (component and anchor)
and destination (component and anchor).
Synchronization arcs may cross channel boundaries.
The reader is encouraged to specify a
more detailed object model, based on the outline
given above.
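As a starting point, the outline above may be transcribed
into the following (tentative) C++ declarations; all
identifiers are assumptions made for the sketch.

#include <map>
#include <string>
#include <vector>

struct anchor {                       // (id, value)
  std::string id;
  double value;                       // e.g. time offset from the start of a fragment
};

struct end_point {                    // one end of a synchronization arc
  std::string component_id;
  std::string anchor_id;
};

struct sync_arc {                     // may cross channel boundaries
  end_point source, destination;
};

class component {
public:
  std::vector<std::string> contents;  // references to the actual data blocks
  std::map<std::string, std::string> attributes;  // semantic information
  std::vector<anchor> anchors;
  std::string channel;                // presentation: the channel to use
  double duration;                    // presentation: how long it is presented
  virtual ~component() { }
};

class compound : public component {
public:
  struct child { component* part; double start_time; };
  std::vector<child> children;        // (component, start-time)
  std::vector<sync_arc> arcs;
};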
Evidently, the incorporation of a variety
of content types and display channels is a serious
challenge.
In particular, the notion of time-based active objects
will probably be difficult to handle.
For an abstract characterization of active time-based
(media) objects, the reader is referred to [Nierstrasz].
On the notion of links -- active documents
Hypermedia documents are
often referred to as
hyperdocuments,
because of their associative structure
imposed by (hyper) links.
Links, in general, may be characterized
as a possibly conditional connection
between a source anchor
and a destination anchor.
There has been an ongoing discussion as to
whether links must lead from byte to byte
or whether they must be defined
at some higher level.
On closer inspection, there appear
to be a number of choices
with respect to the kind of links
that may be supported.
See, for example, Halasz (1988, 1991).
Links
-- anchors
- << source, conditions, destination >>
World Wide Web -- distributed hypermedia
- information retrieval
-- HTML
Active documents
slide: Links and activation
Perhaps the most important
distinction is that between
hard-wired links that act as a goto
in programming languages and
what may be called virtual links,
the destination of which is computed
when activating the source anchor.
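The difference may be sketched as follows, where a
hard-wired link simply stores its destination, whereas
a virtual link carries a procedure that computes the
destination when the link is followed. The names used
are, again, merely illustrative.

#include <functional>
#include <string>

struct anchor_ref {                  // identifies an anchor within a document
  std::string document, anchor;
};

struct hard_link {                   // destination fixed in advance, like a goto
  anchor_ref source, destination;
};

struct virtual_link {                // destination computed upon activation
  anchor_ref source;
  std::function<anchor_ref(const anchor_ref&)> resolve;
  anchor_ref follow() const { return resolve(source); }
};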
This distinction is exemplified
in the World Wide Web (WWW)
distributed hypermedia system,
which was
initiated
by CERN (Switzerland).
The World Wide Web supports HTML
(HyperText Markup Language), a semi-official
hypermedia markup language in the SGML tradition.
The World Wide Web allows the user
to locate and retrieve documents
worldwide across the Internet.
However, a document may either be
stored physically somewhere on a
node in the network or generated
on the fly by some information retrieval
server producing HTML output.
The production of HTML documents
by some external program
as the result of accessing a link
somehow blurs the distinction between
programs and documents.
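As an illustration, consider a (hypothetical) CGI-style
program that produces an HTML page in response to
following a link; the document exists only as the
output of the program.

#include <iostream>
#include <string>

int main() {
  std::string query = "hypermedia";  // assumed to be derived from the link followed
  std::cout << "Content-Type: text/html\n\n";
  std::cout << "<html><body>\n"
            << "<h1>Results for " << query << "</h1>\n"
            << "<p>This page was generated on the fly; "
            << "no stored document corresponds to it.</p>\n"
            << "</body></html>\n";
  return 0;
}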
One step further in this direction is to allow
documents, whether stored or generated, to contain
embedded code that is executed
when the document is viewed.
Such documents may be characterized
as active documents.
Active documents are, for example,
proposed as an extension to the
MIME (Multipurpose Internet Mail Extensions)
standard, to allow
for `live mail'.
Embedding code in documents would allow
for synchronization facilities that
are currently beyond the scope of HTML
and MIME.
However, a standard (in development)
that provides features for synchronization
is the HyTime markup language,
which is another offspring of the
SGML family.
Summarizing,
active documents
are documents that result in
programmed actions
by being displayed.
From a systems programming point of view,
we may regard active documents as
program scripts that are executed
by a (hypermedia) interpreter.
(A well-known example of a
script-based hypermedia programming language is HyperTalk.)
Hypermedia programming, using scripts,
relies intrinsically on an event-driven
control mechanism.
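To give a flavour of such an event-driven mechanism,
the sketch below lets a document's script register
handlers with a (hypothetical) hypermedia interpreter,
which merely dispatches incoming events; all names
are assumptions.

#include <functional>
#include <iostream>
#include <map>
#include <string>

class interpreter {
public:
  using handler = std::function<void()>;
  void on(const std::string& event, handler h) { handlers[event] = std::move(h); }
  void dispatch(const std::string& event) {   // events arise from user actions
    auto it = handlers.find(event);           // or from timing constraints
    if (it != handlers.end()) it->second();
  }
private:
  std::map<std::string, handler> handlers;
};

int main() {
  interpreter view;
  view.on("open", []{ std::cout << "start video" << std::endl; });
  view.on("anchor", []{ std::cout << "follow link" << std::endl; });
  view.dispatch("open");            // executed when the document is displayed
  return 0;
}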
In the following section, we will
explore how we may combine script-based
(event-driven) programming
with (more traditional) object-oriented
development (in C++).