AudioClip {
exposedField SFString description ""
exposedField SFBool loop FALSE
exposedField SFFloat pitch 1.0 # (0,∞)
exposedField SFTime startTime 0 # (-∞,∞)
exposedField SFTime stopTime 0 # (-∞,∞)
exposedField MFString url []
eventOut SFTime duration_changed
eventOut SFBool isActive
}
An AudioClip node specifies audio data that can be referenced by other
nodes that require an audio source.
TIP:
The Sound node is the only node in
VRML 2.0 that uses an audio source, and the AudioClip node is
specified in the Sound's source field. |
The description field specifies a textual description of the
audio source. A browser is not required to display the description
field but may choose to do so in addition to playing the sound.
The url field specifies the URL from which the sound is loaded.
Browsers shall support at least the wavefile format in uncompressed
PCM format (see [WAV]). It is recommended
that browsers also support the MIDI file type 1 sound format (see [MIDI]);
MIDI files are presumed to use the General MIDI patch set. Section "2.5 VRML and the World Wide Web"
contains details on the url field. Results are not defined when
the URL references unsupported data types.
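Because url is an MFString field, it may list several locations in
decreasing order of preference; the browser uses the first URL it can
retrieve and decode. A minimal sketch (the file names and server are
hypothetical):
#VRML V2.0 utf8
Sound {
  source AudioClip {
    # Try the local copy first, then fall back to a remote copy
    url [ "music.wav", "http://example.com/sounds/music.wav" ]
    loop TRUE
  }
}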
TECHNICAL
NOTE:
A very small number of formats are required or recommended by
the VRML specification so that content creators can create worlds
that should work with any VRML implementation. Several criteria
are used to decide which audio (and movie and texture) formats
VRML implementations should be required to support:
- The format must be free of legal restrictions on its use (either
creation or playback).
- It must be well documented, preferably by a standards group independent
of any one company.
- There must be implementations available on multiple platforms, including
the most popular platforms (Mac, PC, and UNIX).
- It should already be widely used on the Web and widely supported by
content-creation tools.
In addition, if there are multiple formats that meet all of the
requirements but have very similar functionality, only one is required.
Deciding which is "best" is often very difficult, but fortunately VRML
implementors are motivated to listen to their customers and are free to
support any format they wish.
In the
particular case of audio, uncompressed .wav files were chosen
because they met all of these criteria. Several different
forms of compression for .wav files are available, but at
the time VRML 2.0 was being designed, none were available
or widely used on all platforms. MIDI is recommended as a
very bandwidth-efficient way of transmitting musical information
and complements the more general (but much larger) .wav format
nicely.
|
The loop, startTime, and stopTime exposedFields and
the isActive eventOut, and their effects on the AudioClip node,
are discussed in detail in "2.6.9 Time-dependent
nodes." The "cycle" of an AudioClip is the length of time
in seconds for one playing of the audio at the specified pitch.
The pitch field specifies a multiplier for the rate at which
sampled sound is played. Only positive values shall be valid for pitch.
A value of zero or less will produce undefined results. Changing the
pitch field affects both the pitch and playback speed of a sound.
A set_pitch event to an active AudioClip is ignored and no pitch_changed
eventOut is generated. If pitch is set to 2.0, the sound shall
be played one octave higher than normal and played twice as fast. For
a sampled sound, the pitch field alters the sampling rate at
which the sound is played. The proper implementation of pitch control
for MIDI (or other note sequence sound clips) is to multiply the tempo
of the playback by the pitch value and adjust the MIDI Coarse
Tune and Fine Tune controls to achieve the proper pitch change.
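For example, a minimal sketch (the file name is hypothetical) that plays a
sample one octave higher and at twice the normal speed; because set_pitch
events are ignored while the clip is active, the value is set before the
clip starts playing:
#VRML V2.0 utf8
Sound {
  source AudioClip {
    url "voice.wav"   # hypothetical sample
    pitch 2.0         # one octave up, played twice as fast
    loop TRUE
  }
}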
TECHNICAL
NOTE: There are a large number of parameters that can be used
to alter an audio sound track. VRML97 allows only the pitch and
volume (which is specified in the intensity field of the
Sound node) to be modified. This gives the world creator a lot
of flexibility with a minimal number of "knobs" to tweak, making
implementation reasonably easy. |
A duration_changed event is sent whenever there is a new value
for the "normal" duration of the clip. Typically, this will only occur
when the current url in use changes and the sound data has been
loaded, indicating that the clip is playing a different sound source.
The duration is the length of time in seconds for one cycle of the audio
for a pitch set to 1.0. Changing the pitch field will
not trigger a duration_changed event. A duration value of "-1"
implies that the sound data has not yet loaded or the value is unavailable
for some reason.
The isActive eventOut can be used by other nodes to determine
if the clip is currently active. If an AudioClip is active, it shall
be playing the sound corresponding to the sound time (i.e., in
the sound's local time system with sample 0 at time 0):
t = (now - startTime) modulo (duration / pitch)
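For example, with hypothetical values: if duration is 5 seconds and
pitch is 2.0, one cycle lasts duration/pitch = 2.5 seconds; if startTime
is 10 and now is 16, then t = (16 - 10) modulo 2.5 = 1.0, so the clip is
one second into its current cycle.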
TECHNICAL
NOTE: You can think of AudioClip as the sound-generation equipment,
while the Sound node functions as the sound-emitting equipment.
AudioClip has all of the controls for starting and stopping the
sound, looping it, and so forth. The Sound node controls how the
sound is emitted--what volume, where in space, and so on. A single
AudioClip can be used with several different Sound nodes, just
like a single tape player might be connected to several sets of
speakers. |
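A minimal sketch of this sharing (the file name is hypothetical): a single
DEF'd AudioClip is reused, via USE, as the source of two Sound nodes at
different locations:
#VRML V2.0 utf8
Sound {
  source DEF HUM AudioClip { url "hum.wav" loop TRUE }
  location -10 0 0
}
Sound {
  source USE HUM        # same clip, emitted from a second location
  location 10 0 0
}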
TIP:
Be careful with how many audio tracks are playing simultaneously.
Read the browser release notes carefully to discover how many
tracks are supported simultaneously. It is generally safe to
limit the number of audio tracks to two or three at one time.
Use ProximitySensors and the min/maxFront and
min/maxBack fields of the Sound node to localize
sounds to nonoverlapping regions.
|
EXAMPLE:
The following example creates two Sound nodes that employ AudioClip
nodes. The first AudioClip is used for a repeating (loop
TRUE) sound that emits from the center of the world. This example
illustrates the case of a sound that is looping forever, starting
when the user first enters the world. This is done by setting
the loop field to TRUE and leaving the stopTime
equal to the startTime (default for both is zero). The
second AudioClip is triggered whenever the user enters or exits
the box defined by the ProximitySensor:
#VRML V2.0 utf8
Group { children [
Sound { # Looped background soundtrack
source DEF AC1 AudioClip {
loop TRUE # Loop forever
url "doodoo.wav"
}
spatialize TRUE
minFront 0
maxFront 20
minBack 0
maxBack 20
}
Sound { # Chimes when user goes through space near origin
source DEF AC2 AudioClip { url "Chimes.wav" }
minFront 20
maxFront 100
minBack 20
maxBack 100
}
DEF PS ProximitySensor { center 0 5 0 size 10 10 10 }
Shape {
geometry Box { size 5 0.05 5 }
appearance Appearance { material Material {} }
}
Shape { # Floor
geometry IndexedFaceSet {
coord Coordinate {
point [ -50 0 -50, -50 0 50, 50 0 50, 50 0 -50 ]
}
coordIndex [ 0 1 2 3 ]
}
}
Viewpoint {
position 0 1 25
description "Outside sound ranges"
}
Viewpoint {
position 0 1 2
description "Inside sound ranges"
}
]}
# Sound chimes when user enters/exits 10x10x10 space near origin
ROUTE PS.enterTime TO AC2.set_startTime
ROUTE PS.exitTime TO AC2.set_startTime
|