Chapter 3:
Node Reference
NavigationInfo {
  eventIn      SFBool   set_bind
  exposedField MFFloat  avatarSize      [0.25, 1.6, 0.75]  # [0,∞)
  exposedField SFBool   headlight       TRUE
  exposedField SFFloat  speed           1.0                # [0,∞)
  exposedField MFString type            ["WALK", "ANY"]
  exposedField SFFloat  visibilityLimit 0.0                # [0,∞)
  eventOut     SFBool   isBound
}
The NavigationInfo node contains information describing the physical
characteristics of the viewer's avatar and viewing model. The NavigationInfo
node is a bindable node (see "2.6.10 Bindable children nodes"),
and thus there exists a NavigationInfo node stack in which the top-most
NavigationInfo node on the stack is the currently bound NavigationInfo
node. The current NavigationInfo node is considered to be a child of
the current Viewpoint node regardless of where it is initially located
in the file. Whenever the current Viewpoint node changes, the current
NavigationInfo node must be re-parented to it by the browser. Whenever
the current NavigationInfo node changes, the new NavigationInfo node
must be re-parented to the current Viewpoint node by the browser.
TECHNICAL
NOTE: The avatarSize and speed fields of NavigationInfo
are interpreted in the current Viewpoint's coordinate system because
it works much better for worlds within worlds and it is much easier
to implement. You might take a model of a house, for example,
scale it down, and make it a toy house in a world you are creating.
If the user binds to a Viewpoint that is inside the house model,
the current NavigationInfo will be reinterpreted to be in that
coordinate space (i.e., scaled), making the user's avatar smaller
and making their navigation speed slower, both of which are desirable
to make navigation through the toy house easy. It is also easier
to implement because the browser only has to keep track of the
coordinate system of the current Viewpoint and doesn't have to
keep track of the coordinate system of the current NavigationInfo.
Note that some VRML browsers may support multiuser scenarios and
allow users to specify their own personal avatar geometry so they
can see each other as they move around the virtual world. These
avatar geometries must behave similarly to NavigationInfo and
be interpreted in the coordinate space of the current Viewpoint.
If a TRUE value is sent to the set_bind eventIn of a NavigationInfo
node, the node is pushed onto the top of the NavigationInfo node stack.
When a NavigationInfo node is bound, the browser uses the fields of
the NavigationInfo node to set the navigation controls of its user interface
and the NavigationInfo node is conceptually re-parented under the currently
bound Viewpoint node. All subsequent scaling changes to the current
Viewpoint node's coordinate system automatically
change aspects (see below) of the NavigationInfo node values used in
the browser (e.g., scale changes to any ancestors' transformations).
A FALSE value sent to set_bind pops the NavigationInfo node from
the stack, results in an isBound FALSE event, and pops to the
next entry in the stack which must be re-parented to the current Viewpoint
node. Section "2.6.10 Bindable children nodes" has more details
on the binding stacks.
The type field specifies an ordered list of navigation paradigms
that specifies a combination of navigation types and the initial navigation
type. The navigation type(s) of the currently bound NavigationInfo determines
the user interface capabilities of the browser. For example, if the
currently bound NavigationInfo's type is "WALK", the browser
shall present a WALK navigation user interface paradigm (see below for
description of WALK). Browsers shall recognize and support at least
the following navigation types: "ANY", "WALK", "EXAMINE", "FLY", and
"NONE".
If "ANY" does not appear in the type field list of the currently
bound NavigationInfo, the browser's navigation user interface shall
be restricted to the recognized navigation types specified in the list.
In this case, browsers shall not present a user interface that allows
the navigation type to be changed to a type not specified in the list.
However, if any one of the values in the type field is "ANY",
the browser may provide any type of navigation interface, and allow
the user to change the navigation type dynamically. Furthermore, the
first recognized type in the list shall be the initial navigation type
presented by the browser's user interface.
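For example, a world meant primarily for inspecting a single object could list "EXAMINE" first so that it becomes the initial paradigm, while "ANY" keeps the remaining interface choices available (a minimal sketch; the field values are illustrative):

```
NavigationInfo {
  type [ "EXAMINE", "ANY" ]  # start in EXAMINE; user may still switch
}
```

Dropping "ANY" from the list would instead restrict the browser's user interface to EXAMINE alone.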
ANY navigation specifies that the browser may choose the navigation
paradigm that best suits the content and provide user interface to allow
the user to change the navigation paradigm dynamically. When the currently
bound NavigationInfo's type value is "ANY", Viewpoint transitions
(see "3.53 Viewpoint") triggered by the
Anchor node (see "3.2 Anchor") or the
loadURL() scripting method (see "2.12.10 Browser script interface")
are undefined.
WALK navigation is used for exploring a virtual world on foot or in
a vehicle that rests on or hovers above the ground. It is strongly recommended
that WALK navigation define the up vector in the +Y direction and provide
some form of terrain following and gravity in order to produce a walking
or driving experience. If the bound NavigationInfo's type is
"WALK", the browser shall strictly support collision detection (see
"3.8 Collision").
FLY navigation is similar to WALK except that terrain following and
gravity may be disabled or ignored. There shall still be some notion
of "up" however. If the bound NavigationInfo's type is "FLY",
the browser shall strictly support collision detection (see "3.8 Collision").
EXAMINE navigation is used for viewing individual objects and often
includes (but does not require) the ability to spin around the object
and move the viewer closer or further away.
NONE navigation disables or removes all browser-specific navigation
user interface forcing the user to navigate using only mechanisms provided
in the scene, such as Anchor nodes or scripts that include loadURL().
If the NavigationInfo type is "WALK", "FLY", "EXAMINE", or "NONE"
or a combination of these types (i.e., "ANY" is not in the list),
Viewpoint transitions (see "3.53 Viewpoint")
triggered by the Anchor node (see "3.2 Anchor")
or the loadURL() scripting method (see "2.12.10 Browser script interface")
shall be implemented as a jump cut from the old Viewpoint to the new
Viewpoint with transition effects that shall not trigger events besides
the exit and enter events caused by the jump.
Browsers may create browser-specific navigation type extensions. It
is recommended that extended type names include a unique suffix
(e.g., HELICOPTER_mydomain.com) to prevent conflicts. Viewpoint
transitions (see "3.53 Viewpoint") triggered
by the Anchor node (see "3.2 Anchor")
or the loadURL() scripting method (see "2.12.10 Browser script interface")
are undefined for extended navigation types. If none of the types are
recognized by the browser, the default "ANY" is used. These string
values are case sensitive ("any" is not equal to "ANY").
TECHNICAL
NOTE: It is recommended that you use your domain name for
unique suffix naming of new navigation types. For example, if
Foo Corporation develops a new navigation type based on a helicopter,
it should be named something like: HELICOPTER_foo.com to distinguish
it from Bar Corporation's HELICOPTER_bar.com.
TIP: NONE can be very useful for taking complete control over the
navigation. You can use the various sensors to detect user input
and have Scripts that control the motion of the viewer by animating
Viewpoints. Even "dashboard" controls--controls that are always in
front of the user--are possible (see the ProximitySensor node for
an example of how to create a heads-up display).
The speed field specifies the rate at which the viewer travels
through a scene in meters per second. Since browsers may provide mechanisms
to travel faster or slower, this field specifies the default, average
speed of the viewer when the NavigationInfo node is bound. If the NavigationInfo
type is EXAMINE, speed shall not affect the viewer's rotational
speed. Scaling in the transformation hierarchy of the currently bound
Viewpoint node (see above) scales the speed;
parent translation and rotation transformations have no effect on speed.
Speed shall be non-negative. Zero speed indicates that the avatar's
position is stationary, but its orientation and field-of-view may still
change. If the navigation type is "NONE", the speed field
has no effect.
TIP: A stationary avatar's position is fixed at one location, but
the avatar may still look around. This is sometimes useful when you
want the user to be able to control their angle of view, but don't
want them to be able to move to a location where they aren't supposed
to be. You might combine in-the-scene navigation to take the user
from place to place, animating the position of a Viewpoint, but allow
the user complete freedom over their orientation.
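The stationary-viewer technique described in this tip can be sketched with a zero speed (the Viewpoint values here are illustrative):

```
NavigationInfo {
  type "WALK"
  speed 0.0    # position is fixed; the user may still look around
}
Viewpoint {
  position 0 1.6 10
  description "Fixed lookout"
}
```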
The avatarSize field specifies the user's physical dimensions
in the world for the purpose of collision detection and terrain following.
It is a multi-value field allowing several dimensions to be specified.
The first value shall be the allowable distance between the user's position
and any collision geometry (as specified by a Collision node) before a collision is detected. The second
shall be the height above the terrain at which the browser shall maintain
the viewer. The third shall be the height of the tallest object over
which the viewer can "step." This allows staircases to be built with
dimensions that can be ascended by viewers in all browsers. The transformation
hierarchy of the currently bound Viewpoint node scales the avatarSize. Translations
and rotations have no effect on avatarSize.
TIP:
The three
avatarSize parameters define a cylinder with a knee. The
first is the cylinder's radius. It should be small enough so that
viewers can pass through any doorway you've put in your world,
but large enough so that they can't slip between the bars in any
prison cell you've created. The second is the cylinder's height.
It should be short enough so that viewers don't hit their head
as they walk through doorways and tall enough so that they don't
feel like an ant running around on the floor (unless you want
them to feel like an ant . . .). And the third parameter is knee
height. (Humans have trouble stepping onto obstacles that are
higher than the height of our knees.) The knee height should be
tall enough so that viewers can walk up stairs instead of running
into them, but low enough so that viewers bump into tables instead
of hopping up onto them.
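Following the toy-house scenario described earlier, a NavigationInfo for a world scaled down by a factor of ten might shrink all three cylinder parameters by the same factor (illustrative values; each is one-tenth of the corresponding default):

```
NavigationInfo {
  avatarSize [ 0.025, 0.16, 0.075 ]  # collision radius, eye height, step (knee) height
  speed 0.1                          # slow down to match the smaller scale
  type "WALK"
}
```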
TECHNICAL
NOTE: If a browser supports avatar geometry, it is up to the
browser to decide how to scale that geometry to fit within the
parameters given by the world author. However, since the author
may have specified general avatar size hints for a world, it makes
sense to consider the avatarSize field when using avatar
geometry in that world (e.g., use avatarSize to bound
and scale the avatar geometry).
TECHNICAL
NOTE: VRML 2.0 was designed to anticipate multiuser worlds,
but leaves out any multiuser functionality because multiuser systems
are still in the research and experimentation phase, and because
producing a single-user specification with interaction and animation
is a useful first step toward multiuser worlds. The avatarSize
field was particularly difficult to design because it is important
for both single-user and multiuser systems. The problem was how
much information about the user's virtual representation should
be included in a VRML 2.0 world. Solutions could range from nothing
at all to a complete specification of an Avatar node, including
geometry, standard behaviors, and so forth. A middle ground was
chosen that specifies just enough information so that world creators
can specify the assumptions they've made about the virtual viewer's
size and general shape when creating their world. No information
is included about how an avatar should look or behave as it travels
through the world. It is expected that each user will desire a
different virtual representation, and such information does not
belong in the virtual world but should be kept with the user's
personal files and registered with the VRML browser(s).
Figure 3-38: avatarSize Field
For purposes of terrain following, the browser maintains a notion
of the down direction (down vector), since gravity is applied
in the direction of the down vector. This down vector shall be along
the negative Y-axis in the local coordinate system of the currently
bound Viewpoint node (i.e., the accumulation of the Viewpoint node's
ancestors' transformations, not including the Viewpoint node's orientation
field).
TECHNICAL
NOTE: "Down" is a local, not a global, notion. There is not
necessarily one down direction for the entire world. Simply specifying
that down is the -Y-axis of the coordinate system of the currently
bound Viewpoint has a lot of very nice scalability benefits, and
allows the creation of worlds on asteroids and space stations,
where up and down can change dramatically with relatively small
changes in location. This does mean that implementations need
to interpret the user's navigation gestures in the coordinate
system of the current Viewpoint, but that should be fairly easy
because the implementation must already know the coordinate system
of the current Viewpoint to correctly perform any Viewpoint animations
that might be happening.
The visibilityLimit field sets the furthest distance the user
is able to see. Geometry beyond this distance may not be rendered. A
value of 0.0 (the default) indicates an infinite visibility limit. The
visibilityLimit field is restricted to be >= 0.0.
TECHNICAL
NOTE: A z-buffer is a common mechanism for performing hidden
surface elimination. The major problem with z-buffers is dealing
with their limited precision. If polygons are too close together,
z-buffer comparisons that should resolve one polygon being behind
another will determine that they are equal, and an ugly artifact
called z-buffer tearing will occur. Z-buffer resolution is enhanced
when the near clipping plane (which should be one-half the avatarSize;
[discussed later]) is as far away from the viewer as possible and
the far clipping plane is as near to the viewer as possible. Ideally,
the proper near and far clipping planes would be constantly and
automatically computed by the VRML browser based on the item at
which the user was looking. In practice, it is very difficult to
write an algorithm that is fast enough so that it doesn't cause
a noticeable degradation in performance and yet general enough that
it works well for arbitrary worlds. So, the world creator can tell
the browser how far the user should be able to see by using the
visibilityLimit field. If the user is inside an enclosed
space, set visibilityLimit to the circumference of the space
to clip out any objects that might be outside the space. You might
find that clipping out distant objects is less objectionable than
z-buffer tearing of near, almost-coincident polygons. In this case,
make visibilityLimit smaller to try to get better z-buffer
resolution for nearby objects.
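One way to apply this advice, sketched here as a common authoring idiom rather than a requirement of the specification, is to pair visibilityLimit with a Fog node of the same range so that distant objects fade out before they are clipped:

```
NavigationInfo { visibilityLimit 50.0 }
Fog {
  color 0.5 0.5 0.5      # chosen to match the Background for a clean fade
  visibilityRange 50.0   # fully fogged just as geometry reaches the limit
}
```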
The speed, avatarSize and visibilityLimit values
are all scaled by the transformation being applied to the currently
bound Viewpoint node. If there is no currently
bound Viewpoint node, the values are interpreted in the world coordinate
system. This allows these values to be automatically adjusted when binding
to a Viewpoint node that has a scaling transformation applied to it
without requiring a new NavigationInfo node to be bound as well. If
the scale applied to the Viewpoint node is nonuniform, the behaviour
is undefined.
The headlight field specifies whether a browser shall turn
on a headlight. A headlight is a directional light that always points
in the direction the user is looking. Setting this field to TRUE allows
the browser to provide a headlight, possibly with user interface controls
to turn it on and off. Scenes that use precomputed lighting (e.g., radiosity
solutions) can turn the headlight off. The headlight shall have intensity = 1,
color = (1 1 1), ambientIntensity = 0.0,
and direction = (0 0 -1).
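A scene lit entirely by precomputed (baked-in) lighting would typically disable the headlight, as in this minimal sketch:

```
NavigationInfo {
  headlight FALSE  # scene relies on precomputed lighting, not a browser light
}
```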
It is recommended that the near clipping plane be set to one-half
of the collision radius as specified in the avatarSize field
(setting the near plane to this value prevents excessive clipping of
objects just above the collision volume, and also provides a region
inside the collision volume for content authors to include geometry
intended to remain fixed relative to the viewer). Such geometry shall
not be occluded by geometry outside of the collision volume.
TECHNICAL
NOTE: The near clipping plane roughly corresponds to the surface
of your eyeballs. In general, things don't look good if they intersect
the near clipping plane, just as things don't look good when objects
intersect your eye! The current Viewpoint position can be thought
of as the center of your head. The first avatarSize value (the
collision radius) specifies the distance from the center of your body
to your shoulders (defining the width of an opening through which you
can squeeze). Defining the near clipping plane to be one-half of that
value roughly corresponds to a human body's physical geometry, with your
eyeballs about halfway from the center of the body to the shoulders.
Allowing geometry in front of the eyeballs but inside the collision
radius gives content creators a useful place to put geometry that
should always follow the user around (see the ProximitySensor section
for details on how to create geometry that stays fixed relative
to the user).
The first NavigationInfo node found during reading of the world is
automatically bound (receives a set_bind TRUE event) and supplies
the initial navigation parameters.
TECHNICAL
NOTE: Anchor is equivalent to a prototype containing a couple
of Group nodes, a TouchSensor, and a Script. It is a standard node
partly because it makes it easier to convert VRML 1.0 files (which
use WWWAnchor) to VRML 2.0, and partly because it is convenient
to have simple hyperlinking support prepackaged in a convenient
form. There are many hyperlinking tasks for which Anchor is inadequate.
For example, if you want a hyperlink to occur after the user has
accomplished some task, then you must use a Script node that calls
loadURL(). If you want to load several different pieces
of information into several other frames you will also have to use
a Script that makes several calls to loadURL(). The basic
building blocks of Scripts and sensors allow you to do almost anything;
the Anchor node is only meant to address the most basic hyperlinking
tasks.
EXAMPLE:
The following example illustrates the use of the NavigationInfo
node. It contains two NavigationInfo nodes, each with a corresponding
ProximitySensor that binds and unbinds it. The idea is that within
each of the two regions bounded by the ProximitySensors, a different
NavigationInfo is to be used. Note that the initial NavigationInfo
will be activated by the initial location of the viewer (i.e., the
first Viewpoint) and thus overrides the default choice of using
the first NavigationInfo in the file:
#VRML V2.0 utf8
Group { children [
DEF N1 NavigationInfo {
type "NONE" # all other defaults are ok
}
DEF N2 NavigationInfo {
avatarSize [ .01, .06, .02 ] # get small
speed .1
type "WALK"
visibilityLimit 10.0
}
Transform { # Proximity of the very small room
translation 0 .05 0
children DEF P1 ProximitySensor { size .4 .1 .4 }
}
Transform { # Proximity of initial Viewpoint
translation 0 1.6 -5.8
children DEF P2 ProximitySensor { size 5 5 5 }
}
Transform { children [ # A very small room with a cone inside
Shape { # The room
appearance DEF A Appearance {
material DEF M Material {
diffuseColor 1 1 1 ambientIntensity .33
}
}
geometry IndexedFaceSet {
coord Coordinate {
point [ .2 0 -.2, .2 0 .2, -.2 0 .2, -.2 0 -.2,
.2 .1 -.2, .2 .1 .2, -.2 .1 .2, -.2 .1 -.2 ]
}
coordIndex [ 0 1 5 4 -1, 1 2 6 5 -1, 2 3 7 6 -1, 4 5 6 7 ]
solid FALSE
}
}
Transform { # Cone in the room
translation -.1 .025 .1
children DEF S Shape {
geometry Cone { bottomRadius 0.01 height 0.02 }
appearance USE A
}
}
]}
Transform { children [ # Outside the room
Shape { # Textured ground plane
appearance Appearance {
material USE M
texture ImageTexture { url "marble.gif" }
}
geometry IndexedFaceSet {
coord Coordinate { point [ 2 0 -1, -2 0 -1, -2 0 3, 2 0 3 ] }
coordIndex [ 0 1 2 3 ]
}
}
]}
DEF V1 Viewpoint {
position 0 1.6 -5.8
orientation 0 1 0 3.14
description "Outside the very small house"
}
DEF V2 Viewpoint {
position 0.15 .06 -0.19
orientation 0 1 0 2.1
description "Inside the very small house"
}
DirectionalLight { direction 0 -1 0 }
Background { skyColor 1 1 1 }
]}
ROUTE P1.isActive TO N1.set_bind
ROUTE P2.isActive TO N2.set_bind