Chapter 3: Node Reference

ProximitySensor
ProximitySensor {
  exposedField SFVec3f    center               0 0 0   # (-∞,∞)
  exposedField SFVec3f    size                 0 0 0   # [0,∞)
  exposedField SFBool     enabled              TRUE
  eventOut     SFBool     isActive
  eventOut     SFVec3f    position_changed
  eventOut     SFRotation orientation_changed
  eventOut     SFTime     enterTime
  eventOut     SFTime     exitTime
}
The ProximitySensor node generates events when the viewer enters,
exits, and moves within a region in space (defined by a box). A proximity
sensor is enabled or disabled by sending it an enabled event
with a value of TRUE or FALSE. A disabled sensor does not send events.
TIP:
Earlier drafts of the specification had two kinds of proximity sensors,
BoxProximitySensor and SphereProximitySensor. Only the box version
made the final specification because axis-aligned boxes are used
in other places in the specification (bounding box fields of grouping
nodes), because they are more common than spheres, and because SphereProximitySensor
functionality can be created using a Script and a BoxProximitySensor.
The BoxProximitySensor must be large enough to enclose the sphere,
and the Script just filters the events that come from the box region,
passing along only events that occur inside the sphere (generating
appropriate enter and exit events, etc.). This same technique can
be used if you need to sense the viewer's relationship to any arbitrarily
shaped region of space. Just find the box that encloses the region
and write a script that throws out events in the uninteresting regions.
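Here is a minimal sketch of that filtering technique; the node names,
the 2-meter sphere radius, and the vrmlscript source are illustrative
assumptions, not part of the specification:
# Sketch: sensing a spherical region (radius 2, centered at the origin)
# with a box-shaped ProximitySensor and a filtering Script.
DEF BOX ProximitySensor { size 5 5 5 }   # strictly encloses the sphere
DEF FILTER Script {
  eventIn  SFVec3f boxPosition       # viewer position from the box sensor
  eventOut SFVec3f spherePosition    # sent only while inside the sphere
  eventOut SFBool  isActive          # sphere enter/exit events
  field    SFBool  wasInside FALSE
  url "vrmlscript:
    function boxPosition(pos, ts) {
      var inside = pos.length() <= 2.0;   // inside the sphere?
      if (inside != wasInside) {          // crossed the sphere boundary
        isActive = inside;
        wasInside = inside;
      }
      if (inside) spherePosition = pos;   // pass along filtered positions
    }"
}
ROUTE BOX.position_changed TO FILTER.boxPosition
Note that this simple sketch does not interpolate crossings of the
sphere boundary the way browsers interpolate crossings of the box, so
its enter and exit timestamps are only as accurate as the incoming
position samples.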
|
A ProximitySensor node generates isActive TRUE/FALSE events
as the viewer enters and exits the rectangular box defined by its
center and size fields. Browsers shall interpolate
viewer positions and timestamp the isActive events with the
exact time the viewer first intersected the proximity region. The
center field defines the centre point of the proximity region
in object space. The size field specifies a vector which
defines the width (x), height (y), and depth (z) of the box bounding
the region. The components of the size field shall be >= 0.0.
ProximitySensor nodes are affected by the hierarchical transformations
of their parents.
TECHNICAL
NOTE: Browsers move the camera in discrete steps, usually
one step per frame rendered while the user is moving. How often
the browser renders frames (whether ten frames per second
or 60 frames per second) varies depending on the speed of
the computer it is running on, among other factors. It is important
that content creators be able to depend on accurate times
from ProximitySensors, which is why it is important that implementations
interpolate between sampled user positions to calculate ProximitySensor
enter and exit times. For example, you might create a "speed
trap" that measures how fast the user moves between two points
in the world (and gives the user a virtual speeding ticket
if they are moving too quickly). This is easy to accomplish
using two ProximitySensors and a Script that takes the two
sensors' enterTimes and determines the user's speed
as speed = distance / (enterTime2 - enterTime1). This should
work even if the sensors are close together and the user is
moving fast enough to travel through both of them during one
frame, provided the implementation performs the correct
interpolation calculation.
If both
the user and the ProximitySensor are moving, calculating the
precise, theoretical time of intersection can be almost impossible.
The VRML specification does not require perfection--implementations
are expected only to do the best they can. A reasonable strategy
is to simulate the motion of the ProximitySensors first, and
then calculate the exact intersection of the user's previous
and current position against the final position of the sensor.
That will give perfect results when just the user is moving,
and will give very good results even when both the user and
the sensor are moving.
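A sketch of such a speed trap follows; the 10-meter spacing, the
node names, and the vrmlscript source are illustrative assumptions:
# Sketch: a speed trap built from two thin ProximitySensors 10 m apart.
DEF TRAP1 ProximitySensor { center 0 1 0   size 10 2 0.5 }
DEF TRAP2 ProximitySensor { center 0 1 -10 size 10 2 0.5 }
DEF RADAR Script {
  eventIn  SFTime  enter1
  eventIn  SFTime  enter2
  eventOut SFFloat speed_changed          # meters per second
  field    SFTime  t1 0
  url "vrmlscript:
    function enter1(t, ts) { t1 = t; }
    function enter2(t, ts) {
      if (t1 > 0 && t > t1)
        speed_changed = 10.0 / (t - t1);  // distance / elapsed time
    }"
}
ROUTE TRAP1.enterTime TO RADAR.enter1
ROUTE TRAP2.enterTime TO RADAR.enter2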
|
The enterTime event is generated whenever the isActive
TRUE event is generated (user enters the box), and exitTime events
are generated whenever an isActive FALSE event is generated (user
exits the box).
The position_changed and orientation_changed eventOuts
send events whenever the viewer is contained within the proximity region
and the position or orientation of the viewer changes with respect
to the ProximitySensor node's coordinate system; this includes the
enter and exit times. The viewer movement may result from a variety
of circumstances, including browser navigation, changes to the
ProximitySensor node's coordinate system, or changes to the bound
Viewpoint node's position or orientation.
Each ProximitySensor node behaves independently of all other ProximitySensor
nodes. Every enabled ProximitySensor node that is affected by the viewer's
movement receives and sends events, possibly resulting in multiple ProximitySensor
nodes receiving and sending events simultaneously. Unlike TouchSensor
nodes, there is no notion of a ProximitySensor node lower in the scene
graph "grabbing" events.
Instanced (DEF/USE) ProximitySensor nodes use the union of
all the boxes to check for enter and exit. A multiply instanced ProximitySensor
node will detect enter and exit for all instances of the box and send
sets of enter/exit events appropriately. However, if any of the boxes
of a multiply instanced ProximitySensor node overlap, results are undefined.
TECHNICAL
NOTE: Instancing a ProximitySensor makes it sense a series
of box-shaped regions instead of a single box-shaped region. Results
are still well defined, as long as the various instances do not
overlap. Results are undefined for viewer movement in the overlapping
region. For example, this instanced ProximitySensor overlaps in
the unit cube around the origin and results are undefined for
position_changed and orientation_changed events generated
in that region:
Transform {
  translation 0 1 0
  children DEF P ProximitySensor { size 1 3 1 }
}
Transform {
  translation 0 -1 0
  children USE P
}
|
A ProximitySensor
node that surrounds the entire world has an enterTime equal
to the time that the world was entered and can be used to start up
animations or behaviours as soon as a world is loaded. A ProximitySensor
node with a box containing zero volume (i.e., any size field
element of 0.0) cannot generate events. This is equivalent to setting
the enabled field to FALSE.
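For example, a world-sized sensor can trigger a one-shot animation
clock the moment the world is entered; this is a sketch, with the
sensor size and node names chosen arbitrarily:
# Sketch: start an animation as soon as the world is loaded.
DEF WORLD ProximitySensor { size 1e5 1e5 1e5 }
DEF CLOCK TimeSensor { cycleInterval 5 }   # one 5-second cycle
ROUTE WORLD.enterTime TO CLOCK.set_startTime
# CLOCK.fraction_changed would then drive an interpolator.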
TIP:
An unanticipated use for ProximitySensors is creating "dashboard"
geometry that stays in a fixed position on the computer's
screen. Putting a ProximitySensor and a Transform node in
the same coordinate system and routing the sensor's position_changed
and orientation_changed eventOuts to the Transform's
set_translation and set_rotation eventIns, like
this
Group {
  children [
    DEF PS ProximitySensor { size ... }
    DEF T Transform { children [ ...dashboard geom... ] }
  ]
  ROUTE PS.position_changed TO T.set_translation
  ROUTE PS.orientation_changed TO T.set_rotation
}
will
make the Transform follow the viewer. Any geometry underneath
the Transform will therefore stay fixed with respect to the
viewer.
There
are a few potential problems with this solution. First,
you must decide on a size for the ProximitySensor. If you
want your dashboard to be visible anywhere in your world,
you must make the ProximitySensor at least as large as your
world. If you don't care about your world being composed into
a larger world, just give the ProximitySensor a huge size
(e.g., size 1e25 1e25 1e25).
Second,
precise placement of geometry on the screen is only possible
if you know the dimensions of the window into which the VRML
browser is rendering and the viewer's field-of-view. A preferred
field-of-view can be specified in the Viewpoint node, but
the VRML specification provides no way to set the dimensions
of the browser's window. Instead, you must use the HTML <EMBED>
or <OBJECT> tags to specify the window's dimensions
and put the VRML world inside an HTML Web page.
Finally,
it is usually desirable for dashboard geometry to always appear
on top of other geometry in the scene. This must be done by
putting the dashboard geometry inside the empty space between
the viewer's eye and the navigation collision radius (set
using a NavigationInfo node). Geometry put there should always
be on top of any geometry in the scene, since the viewer shouldn't
be able to get closer than the collision radius to any scene
geometry. However, putting geometry too close to the viewer's
eye causes the implementation problem known as "z-buffer tearing,"
so it is recommended that you put any dashboard geometry between
half the collision radius and the collision radius. For example,
if the collision radius is 0.1 m (10 cm), place dashboard
geometry between 5 and 10 cm away from the viewer (and, of
course, the dashboard geometry should be underneath a Collision
group that turns off collisions with the dashboard).
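Putting those recommendations together, here is a sketch with all
values illustrative: using the default collision radius of 0.25
meters, the dashboard sits 20 centimeters in front of the eye,
inside a Collision group that disables collision:
# Sketch: dashboard geometry placed between half the collision radius
# (12.5 cm) and the collision radius (25 cm); default avatarSize shown.
NavigationInfo { avatarSize [ 0.25, 1.6, 0.75 ] }
Collision {
  collide FALSE                     # never collide with the dashboard
  children [
    DEF PS ProximitySensor { size 1e5 1e5 1e5 }
    DEF DASH Transform {            # follows the viewer via the ROUTEs below
      children Transform {
        translation 0 -0.05 -0.2    # 20 cm in front of the eye
        children Shape { geometry Box { size 0.1 0.02 0.01 } }
      }
    }
  ]
  ROUTE PS.position_changed TO DASH.set_translation
  ROUTE PS.orientation_changed TO DASH.set_rotation
}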
|
TECHNICAL
NOTE: ProximitySensor started as a simple feature designed
for a few simple uses, but turned out to be a very powerful feature
useful for a surprisingly wide variety of tasks. ProximitySensors
were first added to VRML 2.0 as a simple trigger for tasks like
opening a door or raising a platform when the user arrived at
a certain location in the world. The ProximitySensor design had
only the isActive SFBool eventOut (and the center
and size fields to describe the location and size of the
region of interest).
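That trigger pattern remains the simplest use of the node. Here is
a minimal sketch; the node names, sizes, and door wiring are
illustrative:
# Sketch: swing a door open when the viewer arrives nearby.
DEF NEAR_DOOR ProximitySensor { center 0 1 2 size 4 2 4 }
DEF OPENER TimeSensor { cycleInterval 2 }
DEF SWING OrientationInterpolator {
  key [ 0, 1 ]
  keyValue [ 0 1 0 0, 0 1 0 1.57 ]   # rotate 90 degrees about y
}
DEF DOOR Transform {
  center -0.5 0 0                    # hinge along the door's edge
  children Shape { geometry Box { size 1 2 0.05 } }
}
ROUTE NEAR_DOOR.enterTime TO OPENER.set_startTime
ROUTE OPENER.fraction_changed TO SWING.set_fraction
ROUTE SWING.value_changed TO DOOR.set_rotation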
Just knowing
whether or not viewers are in a region of space is very useful,
but sometimes it is desirable to know exactly where viewers
enter the space or the orientation of viewers when they enter
the space. You might want to create a doorway that only opens
if viewers approach it facing forward (and stays shut if the
users back into it), for example. The position_changed
and orientation_changed events were added to give this
information, but were defined to generate events only when the
isActive eventOut generated events--when a viewer entered
or exited the region. While the ProximitySensor design was being
revised, two other commonly requested features were being designed:
allowing a Script to find out the current position and orientation
of the viewer, and notifying a Script when the viewer moves.
The obvious
solution to the first problem is to provide getCurrentPosition()/getCurrentOrientation()
methods that a Script could call at any time to find out the
current position and orientation of the viewer. The problem
with this solution is that Script nodes are not necessarily
part of the scene hierarchy and so are not necessarily defined
in any particular coordinate system. For the results of a getCurrentPosition()
call to make any sense, they must be defined in some coordinate
system known to the creator of the Script. Requiring every Script
to be part of the scene hierarchy just in case the Script makes
these calls is a bad solution, since it adds a restriction that
is unnecessary in most cases (most Script nodes will not care
about the position or orientation of the viewer). Requiring
some Script nodes to be defined in a particular coordinate system
but not requiring others is also a bad solution, because it
is inconsistent and error prone. And reporting positions and
orientations in some world coordinate system is also a bad solution,
because the world coordinate system may not be known to the
author of the Script. VRML worlds are meant to be composable,
with the world coordinate system of one world becoming just
another local coordinate system when that world is included
in a larger world.
The obvious
solution for the second problem is allowing Scripts to register
callback methods that the browser calls whenever the viewer's
position or orientation changes. This has all of the coordinate
system problems just described, plus scalability problems. Every
Script that registered these "tell-me-when-the-viewer-moves"
callbacks would make the VRML browser do a little bit of extra
work. In a very large virtual world, the overhead of informing
thousands or millions of Scripts that the viewer moved would
leave the browser no time to do anything else.
The not-so-obvious
solution that addressed all of these problems was to use the
position_changed and orientation_changed eventOuts
of the ProximitySensor. They were redefined to generate events
whenever the viewer moved inside the region defined by the ProximitySensor
instead of just generating events when the user crossed the
boundaries of the region, making it easy to ROUTE them to a
Script that wants to be informed whenever the viewer's position
or orientation changes. The coordinate system problems are solved
because ProximitySensors define a particular region of the world,
and so must be part of the scene hierarchy and exist in some
coordinate system.
The scalability
problem is solved by requiring world creators to define the
region in which they're interested. As long as they define reasonably
sized regions, browsers will be able to generate events efficiently
only for ProximitySensors that are relevant. If world creators
don't care about scalability, they can just define a very, very
large ProximitySensor (size 1e25 1e25 1e25 should be
big enough; assuming the default units of meters, it is about
the size of the observable universe and is still much smaller
than the largest legal floating point value, which is about
1e38).
Scripts
that just want to know the current position (or orientation)
of the user can simply read the position_changed (or
orientation_changed) eventOut of a ProximitySensor whenever
convenient. If the position_changed eventOut does not
have any ROUTEs coming from it, the browser does not have to
update it until a Script tries to read from it, making this
solution just as efficient as having the Script call a getCurrentPosition()
method.
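A sketch of that polling pattern, with node names and vrmlscript
source as illustrative assumptions; the Script holds an SFNode
reference to the sensor and reads the eventOut's most recent value
whenever one of its own events arrives:
# Sketch: reading the viewer's position on demand from an eventOut.
DEF PS ProximitySensor { size 1e5 1e5 1e5 }
DEF POLL Script {
  field    SFNode  sensor USE PS
  eventIn  SFTime  check             # any convenient event source
  eventOut SFVec3f lastPosition
  url "vrmlscript:
    function check(t, ts) {
      lastPosition = sensor.position_changed;  // read the latest value
    }"
}
DEF TICK TimeSensor { loop TRUE }    # here, polled every frame
ROUTE TICK.time TO POLL.check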
|
EXAMPLE:
The following example illustrates the use of the ProximitySensor
node (see Figure 3-45). The file contains three ProximitySensor
nodes. The first one, PS1, illustrates how to create a simple
HUD by defining the sensor's bounding box to enclose the entire
world (it is probably a good idea to put up some walls) and
tracking the position and orientation of the user's avatar
during navigation; the HUD geometry (a Sphere with a TouchSensor)
is adjusted to stay in view. Clicking down on the Sphere's
TouchSensor binds to a Viewpoint, V2, and releasing unbinds it.
The second ProximitySensor, PS2, encloses the small pavilion
on the left side of the scene. On entering the sensor's bounding
box, an AudioClip greeting is started. The third ProximitySensor,
PS3, encloses the identical pavilion on the right side. On entering
this pavilion, a Cone floating inside begins a looping animation
and stops when the user exits the pavilion:
#VRML V2.0 utf8
Group { children [
  Collision {
    collide FALSE
    children [
      DEF PS1 ProximitySensor { size 100 10 100 }
      DEF T1 Transform {
        children Transform {
          translation 0.05 -0.05 -.15  # Relative to view
          children [
            DEF TS TouchSensor {}
            Shape {
              appearance DEF A1 Appearance {
                material Material { diffuseColor 1 .5 .5 }
              }
              geometry Sphere { radius 0.005 }
            }
          ]
        }
      }
    ]
  }
  Transform {
    translation -7 1 0
    children [
      DEF PS2 ProximitySensor {
        center 2.5 1 -2.5
        size 5 2 5
      }
      Sound {
        location 2.5 1 2.5
        maxBack 5 minBack 5
        maxFront 5 minFront 5
        source DEF AC AudioClip {
          description "Someone entered the room."
          url "enterRoom.wav"
        }
      }
      DEF G Group { children [
        DEF S Shape {
          geometry Box { size 0.2 2 0.2 }
          appearance DEF A2 Appearance {
            material Material { diffuseColor 1 1 1 }
          }
        }
        Transform { translation 5 0 0 children USE S }
        Transform { translation 5 0 -5 children USE S }
        Transform { translation 0 0 -5 children USE S }
        Transform {
          translation 2.5 2 -2.5
          children Shape {
            appearance USE A1
            geometry Cone { bottomRadius 5.0 height 1.2 }
          }
        }
      ]}
    ]
  }
  Transform {
    translation 7 1 0
    children [
      DEF PS3 ProximitySensor {
        center 2.5 1 -2.5
        size 5 2 5
      }
      USE G
      DEF T Transform {
        translation 2.5 0 -2.5
        children Shape {
          geometry Cone { bottomRadius 0.3 height 0.5 }
          appearance USE A1
        }
      }
      DEF TIS TimeSensor {}
      DEF OI OrientationInterpolator {
        key [ 0.0, .5, 1.0 ]
        keyValue [ 0 0 1 0, 0 0 1 3.14, 0 0 1 6.28 ]
      }
    ]
  }
  Transform { # Floor
    translation -20 0 -20
    children Shape {
      appearance USE A2
      geometry ElevationGrid {
        height [ 0 0 0 0 0 0 0 0 0 0 0 0
                 0 0 0 0 0 0 0 0 0 0 0 0 0 ]
        xDimension 5
        zDimension 5
        xSpacing 10
        zSpacing 10
      }
    }
  }
  DirectionalLight {
    direction -.707 -.707 0
    intensity 1
  }
  Background { skyColor 1 1 1 }
  NavigationInfo { type "WALK" }
  DEF V1 Viewpoint {
    position 5 1.6 18
    orientation -.2 0 .9 0
    description "Initial view"
  }
  DEF V2 Viewpoint {
    position 10 1.6 10
    orientation -.707 0 -.707 0
    description "View of the pavilions"
  }
]}
ROUTE TS.isActive TO V2.set_bind
ROUTE PS1.orientation_changed TO T1.rotation
ROUTE PS1.position_changed TO T1.translation
ROUTE PS2.enterTime TO AC.startTime
ROUTE PS3.isActive TO TIS.loop
ROUTE PS3.enterTime TO TIS.startTime
ROUTE TIS.fraction_changed TO OI.set_fraction
ROUTE OI.value_changed TO T.set_rotation
|

Figure 3-45: ProximitySensor Node Example
|