
VRML File Structure

A VRML file always starts with the header:


#VRML V2.0 utf8
The identifier "utf8" in the header lets you use international characters in your VRML models.
Comments start with #; all characters from there until the end of the line are ignored.
After the header the following can appear:
nodes
prototypes
routes
The next sections describe what each of the above is used for.
Note: For a precise syntax and semantics definition please consult the VRML 2.0 Specification.
The color model in VRML is RGB. In order to define a color three values are needed, Red, Green, and Blue, each between 0.0 and 1.0. For instance 0.0 0.0 0.0 is Black, 0.0 0.0 1.0 is Blue, and 1.0 1.0 1.0 is White.
The units in VRML are assumed to be meters. Although you do not have to follow this convention, it is advisable that you do: if everyone follows it, all worlds will be scale compatible.
The angles in VRML are measured in radians, not in
degrees.
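
Putting these pieces together, here is a minimal sketch of a complete VRML file (the Shape, Box, Appearance, and Material nodes used here are described in the sections that follow; the second line shows the comment syntax):

Example:
#VRML V2.0 utf8
# A 2 x 2 x 2 meter box with a cyan material
Shape {
  appearance Appearance {
    material Material { diffuseColor 0 1 1 }
  }
  geometry Box { size 2 2 2 }
}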

Shape Node
All visible objects are defined inside a Shape node. This node has two fields: appearance and geometry.
The appearance field specifies an Appearance node, which defines the color, textures, and so on to be applied to the geometry. The geometry field indicates which shape is to be drawn.
Syntax:
Shape {
appearance NULL
geometry NULL
}

The appearance field is optional; if it is absent the default values are used.
The value for the geometry field may be any of the following nodes:
Box
Cone
Cylinder
ElevationGrid
Extrusion
IndexedFaceSet
IndexedLineSet
PointSet
Sphere
Text
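
For illustration, a minimal sketch of a Shape with both fields filled in (a red sphere; the Appearance and Material nodes are covered later in this tutorial):

Example:
Shape {
  appearance Appearance {
    material Material { diffuseColor 1 0 0 }  # red
  }
  geometry Sphere { radius 1.5 }
}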

Box Node
The Box node defines a rectangular parallelepiped (a box), letting you specify its width, height, and depth.
The node contains a single optional field, size, which holds three floating point values; the default values are applied if the field is not specified.
Syntax:
Box {
size 2.0 2.0 2.0
}

The center of the box is at (0,0,0) of the local coordinate system.

Sphere Node
The Sphere node has a single field, radius, which allows you to specify its size. The field is optional; if a radius is not specified then it defaults to 1.0.
Syntax:
Sphere {
radius 1.0
}

The center of the sphere is at (0,0,0) of the local coordinate system.

Cone Node
The Cone node lets you specify not only the height and radius but also which parts you want
to draw.
This node contains four fields: bottomRadius, height, side, and bottom.
The bottomRadius and height fields define the geometrical properties of the cone. The values
for these fields must be greater than 0.0.
The side and bottom fields specify which parts of the cone are to be drawn. These two fields take boolean values. For instance, if side is FALSE then only a circle, the bottom of the cone, is drawn. When drawing only part of the shape, note that only one side is lit, the opposite side being black.
All fields are optional, the default values being applied if the field is not specified.
Syntax:
Cone {
bottomRadius 1.0
height 2.0
side TRUE
bottom TRUE
}

The center of the cone is at (0,0,0) of the local coordinate system.
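
For instance, a minimal sketch of a cone drawn without its base (only the side is drawn, so looking from below you would see the unlit inner surface):

Example:
Shape {
  appearance Appearance { material Material { } }
  geometry Cone {
    bottomRadius 1.0
    height 2.0
    side TRUE
    bottom FALSE   # the base circle is not drawn
  }
}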

Cylinder Node
The Cylinder node lets you specify not only the height and radius but also which parts you
want to draw.
This node contains five fields: radius, height, side, bottom, and top.
The radius and height fields define the geometrical properties of the cylinder. The values for
these fields must be greater than 0.0.
The side, bottom, and top fields specify which parts of the cylinder are to be drawn. These three fields take boolean values. For instance, if side is FALSE then only the bottom and top of the cylinder are drawn. When drawing only parts of the shape, note that only one side is lit, the opposite side being black.
All fields are optional, the default values being applied if not specified.

Syntax:
Cylinder {
radius 1.0
height 2.0
side TRUE
bottom TRUE
top TRUE
}

The center of the cylinder is at (0,0,0) of the local coordinate system.
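
For instance, a minimal sketch of an open tube, obtained by turning off both caps:

Example:
Shape {
  appearance Appearance { material Material { } }
  geometry Cylinder {
    radius 1.0
    height 2.0
    side TRUE
    top FALSE      # no top cap
    bottom FALSE   # no bottom cap
  }
}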

PointSet Node
The PointSet node specifies a set of 3D points in the local coordinate system and associated
colors.
This node contains two fields: color and coord.
The color field defines a Color node. The coord field specifies a Coordinate node.
The color field is optional, the default values being applied if it is not specified. If a Material node is specified, the default color is that node's emissive color. Note that both the default emissive color and the default background are black, so if no color is specified and the default background is used you won't be able to see the points.
There must be as many colors in the color field as there are points in the coord field.
Syntax:
PointSet {
color NULL
coord NULL
}
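
A minimal sketch of a PointSet with an explicit color per point, avoiding the black-on-black problem mentioned above:

Example:
Shape {
  geometry PointSet {
    coord Coordinate {
      point [ 0 0 0, 1 1 0, -1 0 0, -1 -1 0 ]
    }
    color Color {
      color [ 1 0 0, 0 1 0, 0 0 1, 1 1 1 ]  # one color per point
    }
  }
}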

IndexedLineSet Node
The IndexedLineSet node specifies a set of polylines in the local coordinate system and
associated colors.
This node contains five fields: coord, coordIndex, color, colorIndex and colorPerVertex.
The coord field specifies a Coordinate node. In this node a set of 3D coordinates are given.
The coordIndex field specifies a list of coordinate indexes defining the polylines to be drawn. Indexes within one polyline are separated by whitespace; the marker -1 separates two adjacent polylines, i.e. an index of -1 indicates that the current polyline has ended and the next one begins.
The color field defines a Color node. This node defines a list of colors to apply to the polylines. The color field is optional, the default values being applied if it is not specified. If a Material node is specified, the default color is that node's emissive color. Note that both the default emissive color and the default background are black, so if no color is specified and the default background is used you won't be able to see the lines.
The colorIndex field serves the same purpose as the coordIndex but regarding colors.
The colorPerVertex is a boolean field which defines how the colors are applied.
 colorPerVertex is TRUE: colors apply to each vertex. The final result is that each line starts in one color and ends in another, producing a gradient effect. On some browsers the color of each line will be the average of the colors at its two ends. There must be as many indices in colorIndex as in coordIndex, with the end-of-polyline markers, -1, in exactly the same places. If colorIndex is absent then coordIndex is used to select the colors, and there must be as many colors as indices in coordIndex.
 colorPerVertex is FALSE: colors apply to each polyline. There must be at least as many indices in colorIndex as there are polylines. If the colorIndex field is absent then the colors are applied in the order presented in the color field.
Before presenting the syntax let's look at some examples. In all the examples presented, the coord field has the following coordinates: 0 0 0, 1 1 0, -1 0 0, -1 -1 0.

Playing with coordIndex and with colors: the colors specified are Red, Green, and Blue.

With colorPerVertex TRUE:
colorIndex [0 1 2 0]
coordIndex [3 0 2 1]

With colorPerVertex FALSE:
colorIndex [0 1]
coordIndex [0 1 -1 2 3]

Note: In the example with colorPerVertex set to TRUE, the colors presented for each line are the average of the two colors defined for the ends of each line.
Syntax:
IndexedLineSet {
coord NULL
coordIndex [ ]
color NULL
colorIndex [ ]
colorPerVertex TRUE
}
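
Putting it all together, a sketch of the second example above as a complete node (two polylines, one color per polyline):

Example:
Shape {
  geometry IndexedLineSet {
    coord Coordinate {
      point [ 0 0 0, 1 1 0, -1 0 0, -1 -1 0 ]
    }
    coordIndex [ 0 1 -1 2 3 ]               # two polylines
    color Color { color [ 1 0 0, 0 1 0 ] }  # Red, Green
    colorIndex [ 0 1 ]                      # one color index per polyline
    colorPerVertex FALSE
  }
}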

IndexedFaceSet Node
The IndexedFaceSet node specifies a set of planar faces in the local coordinate system.
The coord field specifies a Coordinate node. In this node a set of 3D coordinates are given.
The coordIndex field specifies a list of coordinate indexes defining the faces to be drawn. The marker -1 separates faces, i.e. an index of -1 indicates that the current face has ended and the next one begins.
Because faces are always defined by closed polylines, you don't need to repeat the first point. Consider the following values for coord:
coord Coordinate{
point [ 0 0 0, 1 0 0, 1 1 0, 0 1 0]
}
Four points are defined, when you join the points you can make a square face. The
coordIndex used for a square face could be:
coordIndex [ 0 1 2 3]
The meaning of the above coordIndex is: join the first point in the coordinate list to the second point, then join the second to the third, the third to the fourth, and finally the fourth back to the first to close the region which defines the shape.
The color field defines a Color node. This node defines a list of colors to apply to the faces.
The color is optional, the default values being applied if the field is not specified. If a Material
node is specified the default color is the emissive color from this node. Note that both the
default emissive color and background are black, so if no color is specified and the default
background is used you won't be able to see the faces.
The colorIndex field serves the same purpose as the coordIndex but regarding colors. If colorIndex is not specified then coordIndex is used instead.
The colorPerVertex is a boolean field which defines how the colors are applied. The meaning
is similar to the IndexedLineSet case.
texCoord specifies a TextureCoordinate node.
texCoordIndex has a similar meaning to coordIndex but applied to Textures.
There are three normal fields which have the same meaning as the color fields but applied to normals. The normal field defines a Normal node.
The ccw field specifies whether the points which define a face are listed in counterclockwise order, TRUE, or in clockwise or unknown order, FALSE. A face has two sides and sometimes it is important to know which is the front side and which is the back side. Assume you're defining a single face perpendicular to the Z axis and ccw is TRUE: if the face is defined counterclockwise then the front side is the side facing you, otherwise the back side is facing you. If ccw is FALSE then the opposite occurs.
The solid field determines whether the browser should draw both sides of a face or just the front side. VRML assumes by default (solid is TRUE) that the faces in an IndexedFaceSet form a solid shape, in which case there is no need to draw the back side of each face. If solid is FALSE then the browser draws both sides of each face.
The convex field specifies whether the faces defined in coordIndex are convex. VRML can only draw convex faces; when presented with concave faces, the browser splits them into smaller convex faces, which is a time consuming task. If you are sure that all your faces are convex, setting this field to TRUE tells the browser not to bother splitting the faces, saving time.
The creaseAngle field specifies an angle threshold. If two adjacent faces make an angle bigger than the creaseAngle then you'll see clearly where the two faces meet, the edge linking them being sharp; otherwise the edge is smoothly shaded.
Syntax:
IndexedFaceSet {
coord NULL
coordIndex [ ]
color NULL
colorIndex [ ]
colorPerVertex TRUE
normal NULL
normalIndex [ ]
normalPerVertex TRUE
texCoord NULL
texCoordIndex [ ]
ccw TRUE
convex TRUE
solid TRUE
creaseAngle 0.0
}
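
As a sketch, here is the square face discussed above as a complete node (solid is set to FALSE so the face is visible from both sides):

Example:
Shape {
  geometry IndexedFaceSet {
    coord Coordinate {
      point [ 0 0 0, 1 0 0, 1 1 0, 0 1 0 ]
    }
    coordIndex [ 0 1 2 3 ]
    color Color { color [ 1 0 0 ] }  # one red face
    colorIndex [ 0 ]
    colorPerVertex FALSE
    solid FALSE    # draw both sides of the face
  }
}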

Extrusion Node
The IndexedFaceSet node, although very powerful, requires you to define all the faces in the shape. For certain shapes this requires loads of points and faces; for others it is almost impossible. For instance, consider designing a sphere with an indexed face set.
Extrusion is a very powerful node which allows you to define very complex shapes using only a small number of points.
Syntax:
Extrusion {
beginCap TRUE
endCap TRUE
ccw TRUE
convex TRUE
creaseAngle 0
crossSection [1 1, 1 -1, -1 -1, -1 1, 1 1]
orientation 0 0 1 0
scale 1 1
solid TRUE
spine [0 0 0, 0 1 0]
}

The basis of an extruded shape is a 2-D cross section of the final shape. For example, consider a cube: its cross section is a square. Cross sections are defined in the XZ plane. The cross section for a cube could be defined using the following points: (-1,-1), (-1,1), (1,1), (1,-1), (-1,-1).

Note the Z axis orientation: the Z axis is positive downwards, not upwards. In 3D, using the default viewpoint, this means that points closer to you have a higher Z value than points further away.
Another concept needed in an extrusion is the spine. The spine defines the path that the cross section travels to create the shape. In the above example, trying to build a cube, one could start with the cross section at (0,-1,0) and move it upwards to (0,1,0). The following figure shows this spine for the cube, and the respective path for the cross section.

The spine in the above figure is defined by two points, (0,-1,0) and (0,1,0). The steps the browser takes to draw an extruded shape with two spine points are:
 Translate the cross section to the first spine point.
 Reorient the cross section, defined in the XZ plane, so that the Y axis coincides with the direction defined by the two spine points (in the example above this step is not necessary).
 Move the cross section to the last spine point.
When executing the last step the browser creates the side walls of the cube. The end result is presented in the following figure.

Source code for the above figure (without the axes)


Example:
#VRML V2.0 utf8
Transform {
  children Shape {
    appearance Appearance { material Material { } }
    geometry Extrusion {
      crossSection [ -1 -1, -1 1, 1 1, 1 -1, -1 -1 ]
      spine [ 0 -1 0, 0 1 0 ]
    }
  }
}

The fields beginCap and endCap specify whether the extruded shape is open or closed at the ends. For instance, if in the above example both endCap and beginCap were set to FALSE the following figure would be produced:

Notice that you can only see two sides of the cube, the ones facing you. This is because the field solid is set to TRUE by default. Setting this field to FALSE yields the following result:

In the above examples the spine had only two points. That is the simplest spine you can
have. However there is nothing preventing you from having more spine points, for example to
draw a V shape, or even something more complicated.
The principle is always the same; however, there is something which deserves to be mentioned. As noted before, when using a two point spine, the cross section is oriented so that the Y axis coincides with the direction defined by the two spine points. When using more spine points this is only valid for the first spine point; the remaining spine points behave slightly differently.
The second and subsequent spine points orient the cross section so that it is perpendicular to the tangent of the spine. The following figure, presented in 2D for simplicity, shows the cross section orientation in a 3 point spine.

The points in the figure are the spine points. The dotted lines show the path defined by the
spine. At each spine point the cross section's orientation is presented. Note that the cross
section's orientation for the second point is perpendicular to the spine's tangent.
A V shaped spine, defined by the points: (3,5,0), (0,0,0), (-3,5,0), with the square cross
section produces the following result:

Source code for the above figure (without the axes)


Example:
#VRML V2.0 utf8
Transform {
  children Shape {
    appearance Appearance { material Material { } }
    geometry Extrusion {
      crossSection [ -1 -1, -1 1, 1 1, 1 -1, -1 -1 ]
      spine [ 3 5 0, 0 0 0, -3 5 0 ]
    }
  }
}

Extrusion can also be used to create surfaces of revolution using a circular spine. The
following example describes how to build a cone by using extrusion as a surface of revolution.
The cross section is defined by the following points: (-1,0), (0,0), (-1,-2), (-1,0). Note that the cross section repeats the first point; if you don't do that you may end up with a non-solid shape. The spine is defined by selecting eight equally spaced points on a unit circle, repeating the first point at the end to close the circle. The following figure depicts the cross section (note that the Z axis points downwards), and a circular spine defined in the XZ plane.
Please take notice of the cross section points. The point (0,0) is the point that coincides with
the spine's point. The end result is the following figure:

Source code for the example above (without the axes):


Example:
#VRML V2.0 utf8
Transform {
  children Shape {
    appearance Appearance { material Material { } }
    geometry Extrusion {
      crossSection [ -1 0, 0 0, -1 -2, -1 0 ]
      spine [ 1 0 0, 0.707 0 0.707, 0 0 1, -0.707 0 0.707,
              -1 0 0, -0.707 0 -0.707, 0 0 -1, 0.707 0 -0.707, 1 0 0 ]
    }
  }
}

Wait, there's still more: you can scale and orient the cross section for each spine point specified. In the cube example presented above, you can scale the cross section for the second spine point by (0,0), thereby reducing it to a point, to obtain a pyramid as in the following figure.

When using the field scale you either specify one scale for the whole shape, or a list of scale factors, one per spine point (note that scales are given in 2D because they relate to the cross section).
Source code for the above figure (without the axes)
Example:
#VRML V2.0 utf8
Transform {
  children Shape {
    appearance Appearance { material Material { } }
    geometry Extrusion {
      crossSection [ -1 -1, -1 1, 1 1, 1 -1, -1 -1 ]
      spine [ 0 -1 0, 0 1 0 ]
      scale [ 1 1, 0 0 ]
    }
  }
}

Or you could twist the spine's path using rotations to obtain the following picture

Similarly to the scale field, when using the field orientation you either specify one orientation
for the whole shape, or a list of orientation factors for each spine point. Orientations require
four values each: three to define the axis of rotation and one to define the angle.
Source code for the above figure (without the axes)
Example:
#VRML V2.0 utf8
Transform {
  children Shape {
    appearance Appearance { material Material { } }
    geometry Extrusion {
      crossSection [ -1 -1, -1 1, 1 1, 1 -1, -1 -1 ]
      spine [ 0 -1 0, 0 1 0 ]
      orientation [ 0 1 0 0, 0 1 0 3.14 ]
    }
  }
}

The ccw field specifies whether the points which define the cross section are listed counterclockwise, TRUE, or in clockwise or unknown order, FALSE.
The convex field specifies whether the cross section is convex. When presented with concave cross sections, the browser splits them into smaller convex cross sections, which is a time consuming task. If you are sure that the cross section is convex, setting this field to TRUE tells the browser not to bother splitting it, saving time.
The creaseAngle field specifies an angle threshold. If two adjacent faces make an angle bigger than the creaseAngle then you'll see clearly where the two faces meet, the edge linking them being sharp; otherwise the edge is smoothly shaded.

ElevationGrid Node
The ElevationGrid node specifies a grid of points, each with a user defined height. This node is useful to create meshes, see Lighting, or to build a terrain.
An ElevationGrid is built in the XZ plane, starting from the origin and expanding in the positive
direction of the axes. The shape of an ElevationGrid is defined by the following fields:
 xDimension: the number of grid points along the X axis.
 zDimension: the number of grid points along the Z axis.
 xSpacing: the distance between two adjacent points in the X axis direction.
 zSpacing: the distance between two adjacent points in the Z axis direction.
 height: a list of floating point values specifying the height for each point in the grid. The points are ordered left to right and top to bottom.
The other fields for this node are:
The color field defines a Color node. This node defines a list of colors to apply to the faces.
The color field is optional, the default values being applied if it is not specified. If a Material node is specified, the default color is that node's emissive color. Note that both the default emissive color and the default background are black, so if no color is specified and the default background is used you won't be able to see the faces.
The colorPerVertex is a boolean field which defines how the colors are applied. The meaning
is similar to the IndexedLineSet case.
texCoord specifies a TextureCoordinate node.
There are two normal fields which have the same meaning as the color fields but applied to normals. The normal field defines a Normal node.
The ccw field specifies whether the points which define a face are listed in counterclockwise order, TRUE, or in clockwise or unknown order, FALSE. A face has two sides and sometimes it is important to know which is the front side and which is the back side. Assume you're defining a single face perpendicular to the Z axis and ccw is TRUE: if the face is defined counterclockwise then the front side is the side facing you, otherwise the back side is facing you. If ccw is FALSE then the opposite occurs.
The solid field determines whether the browser should draw both sides of a face or just the front side. VRML assumes by default (solid is TRUE) that the faces form a solid shape, in which case there is no need to draw the back side of each face. If solid is FALSE then the browser draws both sides of each face.
The convex field specifies whether the faces are convex. VRML can only draw convex faces; when presented with concave faces, the browser splits them into smaller convex faces, which is a time consuming task. If you are sure that all your faces are convex, setting this field to TRUE tells the browser not to bother splitting the faces, saving time.
The creaseAngle field specifies an angle threshold. If two adjacent faces make an angle bigger than the creaseAngle then you'll see clearly where the two faces meet, the edge linking them being sharp; otherwise the edge is smoothly shaded.
Syntax:
ElevationGrid {
xDimension 0
xSpacing 0.0
zDimension 0
zSpacing 0.0
height [ ]
color NULL
colorPerVertex TRUE
normal NULL
normalPerVertex TRUE
texCoord NULL
ccw TRUE
convex TRUE
solid TRUE
creaseAngle 0.0
}
The following figure depicts the grid built with this node when the list of heights has all elements equal to 0.0.

Elevation Grid Example:


Chessboard
A chessboard is made of 64 tiles in an 8x8 grid. The tiles are colored black and white, so that two tiles sharing a side have different colors.
As far as defining the tiles goes, the ElevationGrid for this purpose is as follows:
Example:
geometry ElevationGrid {
  xDimension 9
  zDimension 9
  xSpacing 1
  zSpacing 1
  height [
    0 0 0 0 0 0 0 0 0
    0 0 0 0 0 0 0 0 0
    0 0 0 0 0 0 0 0 0
    0 0 0 0 0 0 0 0 0
    0 0 0 0 0 0 0 0 0
    0 0 0 0 0 0 0 0 0
    0 0 0 0 0 0 0 0 0
    0 0 0 0 0 0 0 0 0
    0 0 0 0 0 0 0 0 0
  ]
}

Note that the dimensions specified are the number of tiles plus one. The same goes for the list of heights: there are xDimension × zDimension height values specified (81 in this example).
Now we need to define the colors. First set the field colorPerVertex to FALSE. This implies
that the colors defined are applied to each square of the chessboard, and not to the vertices.
Next set the color field to a Color node with 64 values, alternating between black and white.
The complete code is as follows:
Example:
geometry ElevationGrid {
  xDimension 9
  zDimension 9
  xSpacing 1
  zSpacing 1
  height [
    0 0 0 0 0 0 0 0 0
    0 0 0 0 0 0 0 0 0
    0 0 0 0 0 0 0 0 0
    0 0 0 0 0 0 0 0 0
    0 0 0 0 0 0 0 0 0
    0 0 0 0 0 0 0 0 0
    0 0 0 0 0 0 0 0 0
    0 0 0 0 0 0 0 0 0
    0 0 0 0 0 0 0 0 0
  ]
  colorPerVertex FALSE
  color Color {
    color [
      0 0 0, 1 1 1, 0 0 0, 1 1 1, 0 0 0, 1 1 1, 0 0 0, 1 1 1,
      1 1 1, 0 0 0, 1 1 1, 0 0 0, 1 1 1, 0 0 0, 1 1 1, 0 0 0,
      0 0 0, 1 1 1, 0 0 0, 1 1 1, 0 0 0, 1 1 1, 0 0 0, 1 1 1,
      1 1 1, 0 0 0, 1 1 1, 0 0 0, 1 1 1, 0 0 0, 1 1 1, 0 0 0,
      0 0 0, 1 1 1, 0 0 0, 1 1 1, 0 0 0, 1 1 1, 0 0 0, 1 1 1,
      1 1 1, 0 0 0, 1 1 1, 0 0 0, 1 1 1, 0 0 0, 1 1 1, 0 0 0,
      0 0 0, 1 1 1, 0 0 0, 1 1 1, 0 0 0, 1 1 1, 0 0 0, 1 1 1,
      1 1 1, 0 0 0, 1 1 1, 0 0 0, 1 1 1, 0 0 0, 1 1 1, 0 0 0
    ]
  }
}

Text Node
The Text node allows you to display strings in your VRML world. This node has the following
fields:
 string contains the text to be displayed. This field can have one or more lines of text; in the case of multiple lines, each line is a separate string.
 fontStyle specifies a FontStyle node. This node lets you define how the text is presented.
 length specifies the length of each string in VRML units, not in characters. If the string is too short it is stretched; if it is too long it is compressed. A value of 0 means that the string should neither be stretched nor compressed. By default this is a list of zero values.
 maxExtent limits the length of all strings in the field string, scaling them down if necessary.
Syntax:
Text {
string [ ]
fontStyle NULL
length [ ]
maxExtent 0.0
}
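
A minimal sketch of a Text node with two lines of text (a material is given so the text is lit):

Example:
Shape {
  appearance Appearance { material Material { } }
  geometry Text {
    string [ "Hello", "VRML world" ]  # each entry is one line
  }
}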

FontStyle Node
This node is used inside the Text node to specify the display properties of the strings. The
following fields are present:
 family specifies the font family. Values for this field are: "SERIF", "SANS",
"TYPEWRITER".
 style specifies the font style. Values for this field are: "PLAIN", "BOLD", "ITALIC",
"BOLDITALIC".
 horizontal is a boolean field which specifies if the text is to be displayed horizontally.
 leftToRight is a boolean field which specifies if the text is displayed left to right or right to left (as in Arabic).
 topToBottom is a boolean field which specifies if the text is displayed top to bottom or bottom to top (as in Chinese).
 justify specifies justification along both the major and the minor direction. If horizontal is TRUE then the major direction is horizontal and the minor direction is vertical; otherwise the major direction is vertical and the minor direction is horizontal. If only one value is specified then it refers to the major direction. The default value for the minor direction is "FIRST". There are four possible values for each justification: "FIRST", "BEGIN" (left justified), "MIDDLE" (centered), and "END" (right justified). "BEGIN" and "FIRST" are equivalent except for the minor direction when horizontal is TRUE (see note below).
 language specifies the character set: "en" for English, "en_US" for US English, "zh" for Chinese, etc.
 size specifies character height in VRML units.
 spacing specifies the spacing between lines.
Syntax:
FontStyle {
family "SERIF"
style "PLAIN"
horizontal TRUE
justify "BEGIN"
language ""
leftToRight TRUE
size 1.0
spacing 1.0
topToBottom TRUE
}
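
As a hedged illustration, here is the Text example from the previous section with a FontStyle attached (bold sans-serif, centered):

Example:
Shape {
  appearance Appearance { material Material { } }
  geometry Text {
    string [ "Hello", "VRML world" ]
    fontStyle FontStyle {
      family "SANS"
      style "BOLD"
      justify "MIDDLE"
      spacing 1.2
    }
  }
}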

Important: VRML is case-sensitive, be careful.


Note: When horizontal is TRUE the following difference occurs in the minor direction
justification between using "FIRST" and "BEGIN":
"FIRST": The baseline of the first line is placed at the X axis.
"BEGIN": If topToBottom is TRUE the top edge of the first line is the placed at the X axis, else
the bottom edge of the first line is placed at the X axis.

Appearance Node
The appearance node defines the look of the geometry. This node can only be defined inside
a Shape node.
The fields included in this node are: material, texture, and textureTransform. All fields are
optional but at least one field should be specified.
The material field contains a Material node. This node specifies the color of the associated
geometry, see the Shape node, and how the geometry reflects light. Note that if this field is
NULL or unspecified then all lights defined in the world are ignored when rendering the
associated geometry.
The texture field contains one of the texture nodes available: ImageTexture, MovieTexture, or
PixelTexture. If this field is absent or unspecified no textures are applied to the associated
geometry.
The textureTransform field contains a TextureTransform node. This node specifies how the
texture is applied to the geometry.

Syntax:
Appearance {
material NULL
texture NULL
textureTransform NULL
}

Material Node
The material node specifies color, light reflection and transparency. This node can only be
defined inside an Appearance node.
This node has six fields: diffuseColor, emissiveColor, ambientIntensity, shininess,
specularColor, and transparency.
The diffuseColor field defines the color of the geometry. Note: This field is ignored when using
colored textures.
The emissiveColor is used to define glowing objects.
The ambientIntensity field specifies the amount of light reflected by the geometry.
The specularColor field defines the color of the shiny spots of the geometry.
The shininess field controls the intensity of the glow for the shiny spots: small values represent soft glows, whereas high values define smaller and sharper highlights.
The transparency field controls the transparency of the associated geometry (I know, it is a circular definition...). A value of 0.0 makes the related geometry completely opaque; a value of 1.0 makes it completely transparent.
All the "Color" fields have an RGB value associated, i.e., three floating point values between
0.0 and 1.0. The other fields have a single floating point value between 0.0 and 1.0.
Syntax:
Material {
diffuseColor 0.8 0.8 0.8
ambientIntensity 0.2
emissiveColor 0.0 0.0 0.0
specularColor 0.0 0.0 0.0
shininess 0.2
transparency 0.0
}
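
For instance, a sketch of a shiny, slightly transparent red sphere:

Example:
Shape {
  appearance Appearance {
    material Material {
      diffuseColor 1 0 0      # red geometry
      specularColor 1 1 1     # white shiny spots
      shininess 0.8           # small, sharp highlight
      transparency 0.25       # slightly see-through
    }
  }
  geometry Sphere { }
}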

Textures
VRML 2.0 allows you to use images, movies, and pixel defined images to texture your shapes.
When texturing a shape, the texture is applied by default to each of the faces of the shape. The following figure shows an example using an image to texture all the primitive shapes.

One can also specify in the texture nodes whether the texture is to be repeated for each of the faces of the shape.
The following figure shows an example for a single square face using the same image as above for texturing.

As you can see from the above example, the image can be repeated vertically, horizontally, both ways, or not at all.
There is still more: you can translate and rotate the texture, as the next figure shows.

In the above example the number of images is set to 4 in both dimensions, the image is translated (see the lower left corner), and finally the texture is rotated roughly 45 degrees (remember that in VRML angles are measured in radians; the rotation used was 0.75 radians).
To fully understand these operations, an understanding of the texture coordinate system is needed. A texture is represented in a 2-D coordinate system (s,t) that ranges from 0 to 1 in both directions.

The geometric operations mentioned above (scale, translation, and rotation) are applied in this coordinate system.
See the TextureTransform and TextureCoordinate nodes (the latter applies only to IndexedFaceSet and ElevationGrid) for more detailed information.

Combining Textures with Materials


Image Types
 color: The shape's diffuse color specified in the Material node is ignored.
 grayscale: The image's gray values are multiplied by the diffuse color.
The following image shows a grayscale image applied to 4 cubes with different diffuse colors: red,
green, blue and white.

Transparency
Some file formats support transparency levels for pixels. PNG fully implements this concept. GIF does not support per-pixel transparency levels; instead, a single color can be selected to be transparent. The JPEG file format does not support transparency information at all.
When using an image with pixel transparency, the pixel transparency level overrides the transparency of the Material node.

ImageTexture Node
This node specifies the location of the image to be used for texturing the shape, as well as whether the image is to be repeated horizontally or vertically along each of the faces of the shape.
Three fields are present in this node:
 url specifies the location of the image. Valid image formats are JPEG, GIF, and PNG. You can specify multiple locations if you want to; the browser will look for data in those locations in decreasing order of preference.
 repeatS specifies if the image is to be repeated horizontally (along the S texture axis).
 repeatT specifies if the image is to be repeated vertically (along the T texture axis).
All fields are optional, the default values being applied if a field is not specified. Note: if you do not specify the location of the image, url, then no texturing takes place.
Syntax:
ImageTexture {
url [ ]
repeatS TRUE
repeatT TRUE
}
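
A minimal sketch of a textured box; the file name brick.jpg is a hypothetical placeholder, substitute an image of your own:

Example:
Shape {
  appearance Appearance {
    texture ImageTexture {
      url [ "brick.jpg" ]   # hypothetical file name
    }
  }
  geometry Box { }
}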

Image File Formats in VRML
The following formats should be recognized by VRML browsers:
 JPEG, JPG: Joint Photographic Experts Group
 GIF: Graphics Interchange Format
 PNG: Portable Network Graphics
Of the above formats, JPEG is the only one which does not allow images to have transparent pixels.
Note: the image is only repeated along a direction if the corresponding repeatS or repeatT value is set to TRUE. See the TextureTransform node to see how to set the repetition rate.
MovieTexture Node
This node specifies the location of the movie to be used for texturing the shape, as well as whether the movie is to be repeated horizontally or vertically along each of the faces of the shape. The movie must be in the MPEG format.
The following fields are present in this node:
 loop specifies if the movie is to play repeatedly; see the notes after the field definitions.
 speed specifies how fast the movie will play; for instance, if speed is 2 then the film will play twice as fast. Negative speeds play the film backwards.
 startTime specifies the starting time of the movie in seconds. The value of this field is the number of seconds since midnight, January the first, 1970.
 stopTime specifies the stopping time of the movie in seconds. The value of this field is the number of seconds since midnight, January the first, 1970.
 url specifies the location of the movie. You can specify multiple locations if you want to; the browser will look for data in those locations in decreasing order of preference.
 repeatS specifies if the movie is to be repeated horizontally.
 repeatT specifies if the movie is to be repeated vertically.
Notes:
 In VRML the world was created at midnight, January the first, 1970. Some say that the reason for choosing this date as the beginning of time has to do with the birth of the Unix system.
 If loop is set to TRUE and startTime >= stopTime then the movie will run forever. However, if startTime < stopTime the movie will stop as soon as stopTime is reached.
 If startTime >= stopTime then the movie should start as soon as startTime is reached. Note that some browsers only start the movie when startTime > stopTime; this is because in the early drafts of the VRML 2.0 specification this latter condition was required to start the movie.
All fields are optional, the default values being applied if a field is not specified. Note: if you do not specify the location of the movie, url, then no texturing takes place.
Syntax:
MovieTexture {
loop FALSE
speed 1
startTime 0
stopTime 0
url [ ]
repeatS TRUE
repeatT TRUE
}

The MovieTexture node has two eventOut fields: duration_changed and isActive. A duration_changed event is generated when the current url is changed; the value for this event is the duration of the movie in seconds. Note that changing the speed does not produce a duration_changed event. The isActive event outputs TRUE when the movie starts playing, and FALSE when the movie stops.
Note: the movie is only repeated along a direction if the corresponding repeatS or repeatT value is set to TRUE. See the TextureTransform node to see how to set the repetition rate.
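
A minimal sketch of a looping movie texture; the file name clip.mpg is a hypothetical placeholder:

Example:
Shape {
  appearance Appearance {
    texture MovieTexture {
      url [ "clip.mpg" ]  # hypothetical file name
      loop TRUE           # with startTime >= stopTime this plays forever
    }
  }
  geometry Box { }
}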

PixelTexture Node
This node defines, pixel by pixel, the image to be used for texturing the shape, as well as whether the image is to be repeated horizontally or vertically along each of the faces of the shape.
Three fields are present in this node:
 image defines the image using pixels.
 repeatS specifies if the image is to be repeated horizontally.
 repeatT specifies if the image is to be repeated vertically.
The first three values of the image field define the width of the image (in pixels), the height of the image (in pixels), and the number of bytes used for each pixel. Possible values for the number of bytes are:
 1: Grayscale
 2: Grayscale with alpha channel for transparency
 3: RGB
 4: RGB with alpha channel for transparency
All fields are optional, the default values being applied if a field is not specified. Note: if you do not specify the image field, then no texturing takes place.
Specifying an image pixel by pixel may seem like hard work, however some interesting effects
can be achieved with little effort. For example see the following PixelTexture node:
PixelTexture { image 2 1 1 0 255}
Note that the color values for the PixelTexture node range from 0 to 255, as opposed to 0 to 1
as in the VRML color model used for all the other nodes.
This node specifies an image two pixels wide and one pixel tall; the first pixel is black, and the second is white. If this image is applied as a texture to a face then, because the image is scaled to fit the face, a gradient should be displayed starting from black on the left side and progressively turning to white towards the right side (note: some browsers do not yet implement this feature and instead will produce a two color image, the left half black and the right half white).
This node can also be used to create patterns, although a pattern editor is probably the best
option for this task.
Syntax:
PixelTexture {
image 0 0 0
repeatS TRUE
repeatT TRUE
}

Note: the image is only repeated along a direction if the corresponding repeatS or repeatT value is set to TRUE. See the TextureTransform node to see how to set the repetition rate.
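
A sketch of a 2x2 RGB pattern (3 bytes per pixel, values written in hexadecimal; pixels are listed left to right, bottom row first):

Example:
Shape {
  appearance Appearance {
    texture PixelTexture {
      image 2 2 3           # width, height, bytes per pixel
        0xFF0000 0x00FF00   # bottom row: red, green
        0x0000FF 0xFFFFFF   # top row: blue, white
    }
  }
  geometry Box { }
}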

TextureCoordinate
A texture is represented in a 2-D coordinate system (s,t) that ranges from 0 to 1 in both
directions.

This node takes a set of points in 2-D to define how a texture is applied to IndexedFaceSet
and ElevationGrid. If this node is not specified the texture is applied to the shape as a whole.
Specifying a TextureCoordinate node causes the texture to be applied to each face of the
shape according to the coordinates given.
Syntax:
TextureCoordinate {
point [ ]
}
The field point takes a set of 2-D coordinates which define the selected area. The points are separated by whitespace.
The ordering of the points is relevant. The order should be counterclockwise, starting from the origin, to keep the orientation of the texture. If the start point is different then the texture is rotated by a multiple of 90 degrees. Presenting the points in clockwise order will mirror the texture.
The following describes how TextureCoordinate points define texturing for a single face. Afterwards, particular features of textures for IndexedFaceSet and ElevationGrid are presented.
You can select only a part of the texture: for instance, if the points given are (0 0), (0.5 0), (0.5 0.5), (0 0.5), only a quarter of the texture will be used.

The next figure shows the result obtained.
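
A sketch of this quarter selection applied to the square face used earlier in the IndexedFaceSet section (brick.jpg is again a hypothetical image name):

Example:
Shape {
  appearance Appearance {
    texture ImageTexture { url [ "brick.jpg" ] }  # hypothetical file name
  }
  geometry IndexedFaceSet {
    coord Coordinate { point [ 0 0 0, 1 0 0, 1 1 0, 0 1 0 ] }
    coordIndex [ 0 1 2 3 ]
    texCoord TextureCoordinate {
      point [ 0 0, 0.5 0, 0.5 0.5, 0 0.5 ]  # lower left quarter
    }
    solid FALSE
  }
}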

You can also use this node to have the texture repeated, for instance consider the following
points: (0 0), (2 0),(2 2), (0 2). In this case the texture is repeated twice in each dimension.

More complex settings can be used which restrict the texture in one dimension but repeat it in the other dimension. For example, consider the points (0 0), (0.5 0), (0.5 2), (0 2). The left image shows the result of using these texture coordinates on a square face; the right image shows the area selected in terms of texture coordinates.
In all the examples above the area selected was either square or rectangular. There is nothing preventing you from choosing non-rectangular areas; however, the number of points must match the number of coordinates used to build the face, i.e. you can't map a triangular selection onto a square face.
The next two sections describe particular features of IndexedFaceSet and ElevationGrid.
IndexedFaceSet
The number of points should match the number of coordinates used to define the face; if multiple faces are defined then the number of points must agree with the number of points used to define each face.
The number of points in the TextureCoordinate node must be at least equal to the number of points of the face with the highest number of points.
If the field texCoordIndex is not NULL then the values of this field specify the indexes of the
TextureCoordinate points which are used to define the selection for each face.
If the field texCoordIndex is NULL then the field coordIndex is used to specify the indexes.

ElevationGrids
When you specify an ElevationGrid you define a grid in which each point of the grid has a
height. By default, i.e. if no TextureCoordinate is defined, texturing is controlled by
TextureTransform. However sometimes you may want more control in the way textures are
applied.
A texture coordinate is defined for each point in the grid. The first point in the grid
corresponds to the first point in the TextureCoordinate node, the last point in the grid
corresponds to the last point in the TextureCoordinate node. There must be as many points in
the ElevationGrid as there are in the TextureCoordinate node.

TextureTransform Node
This node allows you to perform simple geometric transformations (scale, rotation, and translation) on texture coordinates; see the TextureCoordinate node. This node is defined inside an Appearance node.
Syntax:
TextureTransform {
scale 1 1
rotation 0
center 0 0
translation 0 0
}
The value of the center field specifies the center point about which rotation and scale takes
place.
When all fields are used in combination then the coordinates specified in the
TextureCoordinate node are first scaled and rotated about the center point, and finally
translated.
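
For instance, a sketch reproducing the effect described earlier in the Textures section (four repetitions in each direction, rotated about the middle of the texture; brick.jpg is a hypothetical image name):

Example:
Appearance {
  texture ImageTexture { url [ "brick.jpg" ] }  # hypothetical file name
  textureTransform TextureTransform {
    scale 4 4        # repeat the texture 4 times in each direction
    rotation 0.75    # roughly 45 degrees, in radians
    center 0.5 0.5   # rotate about the middle of the texture
  }
}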

Lighting Nodes
Up until now the worlds you've seen in this tutorial have been lit by a special light, the headlight. This light is created by your browser and is attached to the current viewpoint; it always points to where you're looking, as if you had a light attached to your head.
The headlight can be turned on or off using the browser's options or with the NavigationInfo node.
VRML supports three additional types of lights. They are:
 Directional Light
 Point Light
 Spot Light
When using a Directional Light the light rays are all parallel. This light has no defined location,
only a direction. It is as if the light is far, far away from your world.
A Point Light is a light, placed in your world, which brightens everything around it; the light rays go in all directions from the light's location. Think of the sun, for instance.
A Spot Light is a ... spotlight. This type of light creates a cone of light.
Further information specific to each type of light is provided in the links above. However, if you're new to lighting in VRML then the following should be of some use to help you understand how it is done. Note: in all the figures below the headlight is turned off.

Light Reflection
In theory, when light rays hit an object, the object may reflect them, depending on its color and the color of the light. Light reflection depends on the properties of the object being lit; surely you have seen realistic static 3D images where this effect is present.
However, computing light reflection on the fly is hard work. In order to display 3D worlds interactively, some shortcuts had to be taken for the action to be as smooth as possible. Therefore there is no reflected light in VRML, only direct light. This means that if an object is not in the path of the light rays from any of the lights placed in your world it will remain dark.
As a replacement for light reflection, the lights in VRML have a field called ambientIntensity. This field controls how much the light contributes to the overall world lighting. With high values for ambientIntensity the world will be a brighter place. Although a crude replacement, it can add some realism to your world.

Light Attenuation and Scope


Another real world lighting effect is that light gradually grows weaker with distance. In VRML this feature is implemented: with the field attenuation you can specify how the light drops off as distance increases.
Note that this field only exists for Point Light and Spot Light; it doesn't apply to Directional Light. This is because, as mentioned above, the Directional Light does not have a defined location in the world.
So how does one limit the effect of a Directional Light? This type of light only illuminates the objects which are placed within the group where the light is defined. On the other hand, Point Lights and Spot Lights are independent of their position within the file, i.e. their lighting effect is not restricted to the group in which they are defined.

Shadows
Look at the following figure. In it there is a Point Light placed on the left, and two spheres on the right. The Point Light and the centers of the spheres are colinear.
According to the text above, the left sphere should block the light rays from reaching the sphere on the right. However, from the figure it is clear that this is not happening. The reason is that shadows do not exist in VRML: the computational load required to compute shadows is too heavy for displaying 3D graphics on the fly.
So how does one create shadows? You could create them manually: for instance, in the figure above the right sphere could be defined darker than the left sphere, simulating shadowing. However, this approach is not realistic for anything but very small models. The only way to block light is to define the objects which are not supposed to be lit by a particular light outside the group where the light is defined; as mentioned before, though, grouping only has this effect on Directional Lights.
Basically there is no way out: you're stuck with a shadowless world, unless you're a real perfectionist and up for some real hard work.

Lighting Flat Surfaces


Consider the following figure. In it a Spot Light is aimed at the center of a cubic Box and the
cube's face is inside the cone of light defined by the Spot light.

So far so good, the cube's face is fully lit as expected. Now consider the next figure. In it a spot light is also aimed at the cube's face, but the light cone's intersection with the cube's face is a circle which lies totally inside the face, i.e. none of the vertices of the cube's face are inside the cone of light.

The cube's face is totally dark. Why? Let's see another figure which may shed some light on this obscurity. In this next figure a spot light is aimed at the top right vertex of the cube. The cone of light does not contain any other vertex of the cube's face.

Again something is wrong, there should be a circular lighter area in the cube, and instead
there is a linear one.
In VRML the lighting of a flat surface is computed based on the amount of light which reaches each vertex of the face, interpolated across the face. So in the first figure all vertices were equally lit and the face is evenly lit. In the second figure none of the vertices were lit, therefore the face was dark.
The third figure has only one vertex lit, the top right one. Light along the right edge of the face is computed as the average between the top right vertex, which is lit, and the bottom right vertex, which is not, therefore the face becomes progressively darker as we move from the top right vertex to the bottom right vertex. Similar reasoning can be applied between the top right vertex and the top left vertex, as well as between the top right vertex and the bottom left vertex.
Now look at the next figure.

Now things are getting better: OK, you don't have a circular light inside the shape, but at least the light doesn't light the whole shape. The trick is to define either an IndexedFaceSet or a flat ElevationGrid. Using either of these shapes one can create a mesh, i.e. instead of defining a flat face using just the outer vertices, a mesh of small faces is used to construct the original larger face. As the faces which build up the mesh grow smaller, the light effect gets closer to a circle. This process, while providing more realism in the lighting, has the disadvantage that more faces need to be drawn, therefore slowing down the display of the world. There is no rule of thumb to say when a performance problem will occur; you just have to try meshes of different granularity, i.e. varying the number of small faces that make up the mesh, and see how performance is affected.

Colored Lights
Color can be applied to lights as well as shapes. You can have blue lights, red lights, brown
lights, just pick a color. However there is a detail that you should be aware of.

In the example above there is a blue sphere and a Spot Light to its right, right? Well, almost. There is also a red Spot Light on the left side of the sphere, pointing at it. Yet no trace of this red light can be seen. Why? The answer is simple: a blue sphere can only reflect blue light, and as there is absolutely no blue light in a red light the sphere remains black.
In the real world pointing a red light to a blue object wouldn't result in total darkness, but in the
real world there is nothing like true red light, and true blue objects. All lights and objects in the
so called real world have a mixture of colors. When we say that an object is blue we are
saying that its stronger color is blue, and not that it doesn't contain any amount of any other
color. In computer models true colors do exist though, so be aware.
One possible fix for this 'problem' is to avoid defining pure colors. For instance, instead of defining blue as 0 0 1 in the RGB model, which means pure blue, define blue as 0.3 0.3 1, i.e. all color components are present although one is predominant. The following figure has a sphere defined with this latter 'blue'.

The sphere remains blue when lit by a white spot light, but now a tenuous effect from the red light is visible. Note, however, that defining colored lights in this way does not behave symmetrically. See for example the following figure, in which the red light was replaced by a light whose color is defined in RGB as 1 0.3 0.3.

The effect of this new red light is not red at all! This is because only the blue component of the red light is reflected. The light effect from the light on the left is dimmer than the one from the right because the latter has more blue light than the former.

Shapes Unlit
If the material field in the Appearance node is NULL or not specified then the associated
geometry in the Shape node is not lit. If you don't want to specify a material you can always
define the material field as: material Material { }
In the following figure a Point light is placed above two Spheres. Note however that while the
left sphere is lit correctly, the right one is not affected by the point light.

The reason why the right sphere doesn't react to light is because the material field is
undefined in the Appearance node of the shape.

DirectionalLight Node
If you're new to lighting in VRML, there is a Lighting section in this tutorial which discusses
general features of lighting in VRML. In here only the aspects which are particular to
Directional Lights are discussed.
Directional Lights define a light source which is placed very far away from your world (don't worry, you don't have to specify a location for the light). The light rays, when they reach the world, are parallel to a given direction, specified by the direction field as a 3D vector. The following figure attempts to define a Directional Light graphically.

This light should only affect the nodes which are defined within the same group, i.e. objects
placed outside the group where the Directional Light is defined are not lit. I say should and not
must because not all browsers support this feature.
The following fields are present in this node:
 on specifies if the light is active. This is a boolean field.
 intensity has values between 0.0 and 1.0. Higher values specify stronger lights.
 ambientIntensity specifies how much this light contributes to the overall lighting. Values must be between 0.0 and 1.0.
 color is an RGB field specifying the color of the light.
 direction specifies a vector in 3D. The light rays are parallel to this vector.
Syntax:
DirectionalLight {
on TRUE
intensity 1
ambientIntensity 0
color 1 1 1
direction 0 0 -1
}

Note: On the VRML world presented as example the sphere is placed at -2.5 0 0, the cone is
placed at the origin, and the cylinder is placed at 2.5 0 0. If you're not familiar with placing
shapes other than at the origin see the Transform node.
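
A sketch of the scoping behaviour described above; the sphere shares a group with the light, the cylinder does not:

Example:
#VRML V2.0 utf8
Group {
  children [
    DirectionalLight { direction 0 0 -1 }   # lights only this group
    Transform {
      translation -2.5 0 0
      children Shape {
        appearance Appearance { material Material { } }
        geometry Sphere { }
      }
    }
  ]
}
# Outside the group above, so (per the specification) unaffected by the light:
Transform {
  translation 2.5 0 0
  children Shape {
    appearance Appearance { material Material { } }
    geometry Cylinder { }
  }
}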

PointLight Node
If you're new to lighting in VRML, there is a Lighting section in this tutorial which discusses
general features of lighting in VRML. In here only the aspects which are particular to Point
Lights are discussed.
Point Lights define a light source at a specified location. The light rays from this type of light go in all directions. This implies that, as opposed to Directional Lights, Point Lights have a location but not a direction field. The following figure attempts to define a Point Light graphically (the point where the light rays start is the light's location).

This light lights all nodes regardless of their position in the file, i.e. this light is not scoped. There is, however, a way of limiting the volume which is lit: one can specify a radius which defines the maximum distance that the light rays can travel. Objects which are further away from the light source than the radius are not lit.
There is still another way to control the light within the sphere defined by the radius: using the attenuation field one can specify how the light grows dimmer as distance increases.
The following fields are present in this node:
 on specifies if the light is active. This is a boolean field.
 intensity has values between 0.0 and 1.0. Higher values specify stronger lights.
 ambientIntensity specifies how much this light contributes to the overall lighting. Values must be between 0.0 and 1.0.
 color is an RGB field specifying the color of the light.
 location specifies a vector in 3D defining the coordinates of the light in your world.
 attenuation is a 3D vector that specifies how the light loses its intensity as the distance from the light source increases. All vector values must be greater than or equal to zero.
 radius specifies the maximum distance the light rays travel. Must be greater than or equal to zero.
Syntax:
PointLight {
on TRUE
intensity 1
ambientIntensity 0
color 1 1 1
location 0 0 0
attenuation 1 0 0
radius 100
}

Note: On the VRML world presented as example the sphere is placed at -2.5 0 0, the cone is
placed at the origin, and the cylinder is placed at 2.5 0 0. If you're not familiar with placing
shapes other than at the origin see the Transform node.
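
For instance, a sketch of a point light with quadratic attenuation and a limited radius:

Example:
PointLight {
  location 0 2 0      # two meters above the origin
  radius 10           # objects further than 10 meters are not lit
  attenuation 0 0 1   # intensity falls off with the square of the distance
}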

SpotLight Node
If you're new to lighting in VRML, there is a Lighting section in this tutorial which discusses
general features of lighting in VRML. In here only the aspects which are particular to Spot
Lights are discussed.
Spot Lights define a light source at a specified location, pointed in a particular direction. The light rays from this type of light are constrained to the interior of a cone, the cone's apex coinciding with the light's location. The following figure attempts to define a Spot Light graphically (the point where the light rays start is the light's location).

The cone of light is defined by two fields: cutOffAngle and beamWidth. The cutOffAngle
defines the angle of the cone, in radians. The beamWidth defines the angle of an inner cone
within which the light intensity is constant. The light rays which fall between the inner cone
and the outer cone have a decreasing intensity from the inner to the outer cone. If the
beamWidth is larger than the cutOffAngle then the light has a constant intensity within the
cone.
A Spot Light lights all nodes regardless of their position in the file, i.e. this light is not scoped. There is, however, a way of limiting the volume which is lit: one can specify a radius which defines the maximum distance that the light rays can travel. Objects which are further away from the light source than the radius, or which lie outside the outer cone, are not lit.
There is still another way to control the attenuation. Using the attenuation field one can
specify how the light grows dimmer with distance, within the sphere defined by the radius.
The following fields are present in this node:
 on specifies if the light is active. This is a boolean field.
 intensity has values between 0.0 and 1.0. Higher values specify stronger lights.
 ambientIntensity specifies how much this light contributes to the overall lighting .Values
must be between 0.0 and 1.0
 color is a RGB field to specify the color of the light.
 location which specifies a vector in 3D defining the coordinates of the light in your world.
 direction which specifies a vector in 3D defining the aim of the light.
 attenuation is a 3D vector that specifies how the light looses its intensity as distance from
the light source increases. All vector values must be greater than or equal to zero
 radius specifies the maximum distance for the light rays to travel. Must be greater than or
equal to zero.
 cutOffAngle specifies the cone within which the light rays are constrained. Must be greater
than or equal to zero, and less than or equal to 90 degrees, approximately 1.57 radians.
 beamWidth specifies an inner cone within which the light rays have a uniform intensity.
Must be greater than or equal to zero, and less than or equal to 90 degrees, approximately
1.57 radians.
Syntax:
SpotLight {
on TRUE
intensity 1
ambientIntensity 0
color 1 1 1
location 0 0 0
direction 0 0 -1
attenuation 1 0 0
radius 100
cutOffAngle 0.78
beamWidth 1.57
}

Note: Due to the default location and direction of the Spot Light the initially loaded world is
black. On the VRML world presented as example the sphere is placed at -2.5 0 0, the cone is
placed at the origin, and the cylinder is placed at 2.5 0 0. If you're not familiar with placing
shapes other than at the origin see the Transform node.
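As a minimal sketch (separate from the tutorial's example world; the values are illustrative), the following world aims a SpotLight from in front of a sphere back at the origin so the sphere falls inside the cone of light:
Example:
#VRML V2.0 utf8
SpotLight {
location 0 0 5
direction 0 0 -1
cutOffAngle 0.78
}
Shape {
appearance Appearance { material Material { } }
geometry Sphere { }
}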
Grouping Nodes
A set of nodes can be defined as a group in VRML. The following grouping nodes are
available in VRML:
 Anchor: defines a complex shape, built using a set of shapes, as a hyperlink to another
VRML world, to an HTML page, or to any other data that your browser can read.
 Billboard: specifies a set of nodes which are always turned to you regardless of your
position.
 Collision: defines a set of nodes for which the browser is notified when a collision
occurs.
 Group: defines a new node type composed of a set of nodes so that you can reuse it
later without repeating the entire set of nodes.
 Switch: defines a set of nodes in which at most one of the nodes is drawn.
 Transform: defines a new coordinate system so that objects can be placed in locations
other than the origin.
Grouping can also be used for scoping, i.e. some node types, when placed in a group, only
affect the other nodes within the same group. For instance, Directional Lights only affect
nodes within the same group (at least according to the official VRML 2.0 specification).
Sensor nodes are also constrained to the group they're defined in.
You can place groups inside groups, creating a hierarchical structure of nodes. A group node
can have any number of child nodes inside the children field.
All grouping nodes accept the following events:
addChildren: which adds a new node to the group
removeChildren: which removes a given node from the group
To generate one of these events you have to use Scripts.
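As a hedged sketch of generating these events (the node names ts, g and s are illustrative, and the Script body relies on the standard Browser.createVrmlFromString call of the VRML 2.0 script binding), clicking the box adds a sphere to the initially empty group g:
Example:
#VRML V2.0 utf8
Group {
children [
DEF ts TouchSensor { }
Shape { geometry Box { } }
]
}
DEF g Group { }
DEF s Script {
eventIn SFTime clicked
eventOut MFNode newKids
url "vrmlscript:
function clicked(t) {
newKids = Browser.createVrmlFromString('Transform { translation 0 2 0 children Shape { geometry Sphere { } } }');
}"
}
ROUTE ts.touchTime TO s.clicked
ROUTE s.newKids TO g.addChildren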

Group Node
If you're not familiar with grouping nodes in VRML see the section Creating Hierarchical Node
Structures for general information on grouping.
The group node lets you treat a set of nodes as a single entity.
The following fields are present:
 children which contains all the nodes included in the group.
 bboxCenter specifies the center of a box that encloses the nodes in the group. The value
for this field is a 3D point.
 bboxSize specifies the size of a box that encloses the nodes in the group. By
default this field has a value of -1 -1 -1, which implies that no box is defined. The values for
this field must be greater than or equal to zero. If the children nodes do not fit inside the box
defined the results are undefined. The latter two fields are optional. They can be used by the
browser for optimization purposes.
Syntax:
Group {
children []
bboxCenter 0 0 0
bboxSize -1 -1 -1
}

The following hierarchical structure of a VRML file exemplifies scoping with the group node:
Group g1 {
Shape A
DirectionalLight 1
Group g2 {
DirectionalLight 2
Shape B
}
}
The directional light in group 1 will light both shapes A and B because a directional light lights all
nodes within the same group and in descendant groups. However, the directional light in
group 2 will only light shape B. Shape A is placed outside the group where the light is defined
and therefore it is not lit by the directional light in group 2.
The following code exemplifies the syntax of the Group node:
Example:
#VRML V2.0 utf8
Group {
children [
Shape {
geometry Cylinder {
height 5.0
radius 0.5
}
}
Shape {
geometry Sphere {}
}
]
}

Transform Node
The transform node is a grouping node. As a group node it can be used to define a set of
nodes as a single object. However this is not the main purpose of this node. This node allows
you to define a new local coordinate system for the nodes within the group.
This node can be used to perform the following geometric transformations:
 Scale
 Rotation
 Translation
All the nodes inside a Transform group are affected by these transformations, i.e. all
coordinates are computed in the local coordinate system. Transform groups inside
Transforms groups accumulate the transformations specified in each Transform. The inner
Transform group defines a local coordinate system based on the coordinate system defined in
the outer transform group.
The following fields are present:
 children which contains all the nodes included in the group.
 scale specifies a 3D scaling transformation.
 scaleOrientation defines a rotation of the axes for the scaling operation.
 center defines the center of the scaling transform.
 rotation defines a rotation on an arbitrary axis.
 translation defines the origin of the local coordinate system.
 bboxCenter specifies the center of a box that encloses the nodes in the group. The value
for this field is a 3D point.
 bboxSize specifies the size of a box that encloses the nodes in the group. By
default this field has a value of -1 -1 -1, which implies that no box is defined. The values for
this field must be greater than or equal to zero. If the children nodes do not fit inside the box
defined the results are undefined.
Syntax:
Transform {
scale 1 1 1
scaleOrientation 0 0 1 0
center 0 0 0
rotation 0 0 1 0
translation 0 0 0
bboxCenter 0 0 0
bboxSize -1 -1 -1
children []
}

Scale
A scaling operation allows you to resize a shape. You can enlarge or decrease the size of a
shape in any number of dimensions. The scale factors must be positive. The next figure
presents an example.

The left box is a cube defined using the Box node without any scaling operation. The second
is scaled in the X axis by 0.2, the third is scaled in the Y axis also by 0.2, and the fourth is
scaled both in the Y and Z axes by 0.2.
Sometimes you may wish that the axes to which the scaling operation is applied were not the
standard axes. The scaleOrientation field allows you to rotate the axes; the scaling factors will be
applied to the rotated axes and not to the original axes. The scaleOrientation field specifies a
vector which defines the rotation axis, and the angle to rotate by.

The cube on the right has a scale of 1.3 in the Y axis, which was previously rotated by 45
degrees in the Z axis. The next figure shows the computations involved step by step.

The left figure shows the axes and a box prior to any geometric transformations. The second
shows the effect of the scaleOrientation, a rotation of 45 degrees in the Z axis. The
scaleOrientation used was 0 0 1 0.785, where the first three values specify a vector and the
fourth an angle (recall that in VRML angles are defined in radians).
Note that after the third step the axes go back to normal, i.e. those in the left figure.
Because translation is the last transformation to be applied, it may sometimes be convenient
to define a translated set of axes for the scaling operation. Note that as for the
scaleOrientation, this translation is only effective for the scale transformation. The field center
allows you to perform such a translation.
The next figure shows a default box (in the middle) and two scaled boxes, the one on the right
using a center of (0,-1,0) and the left one without using center.

The effect of the center field in this case is that the final position of the scaled boxes is
different. Sometimes this can bring some advantage, if for instance you need to know where a
specific point of the shape will be after scaling. In the above example, the base of the shape
in the right after scaling remains at the same position as the original unscaled shape (in the
middle).
You can combine all three scaling related fields together; in this case the axes will be
translated to the center point, rotated according to the scaleOrientation, and finally scaled as
defined in the scale field.
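As a hedged sketch combining the three fields (the values are illustrative): the box is scaled by 0.2 along an axis system rotated 45 degrees about Z, with the scaling centered at the box's base:
Example:
Transform {
scale 1 0.2 1
scaleOrientation 0 0 1 0.785
center 0 -1 0
children Shape { geometry Box { } }
}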

Rotation
A rotation is defined by a vector and an angle. The vector specifies the axis of rotation,
whereas the angle specifies the amount to rotate in a counterclockwise direction. The
following figure shows a dotted Box in its default position and the Box rotated 45 degrees in
the Z axis.

Rotations can be done in an arbitrary axis if desired. The following figure shows some
rotations applied to boxes.

The left box has no rotation applied to it, the second box has a rotation of 45 degrees in the Y
axis, the third is rotated in the X axis also by 45 degrees, the fourth is rotated 45 degrees
using a vector (1 1 0).
The center field explained earlier in the Scale section has the same effect in rotations.
In the above example a center of (2,2,0) was used, middle figure, and afterwards a rotation of
45 degrees was performed on the Z axis, right figure.
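A sketch reproducing that last case: a box rotated 45 degrees about the Z axis around the center (2,2,0).
Example:
Transform {
rotation 0 0 1 0.785
center 2 2 0
children Shape { geometry Box { } }
}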

Translation
Translations allow you to place a shape wherever you want to. The following figure attempts
to depict the concept.

The dotted lines represent the coordinate system outside the Transform node. The full lines
represent the local coordinate system inside a Transform node which defines a translation as
specified by the arrow vector. The translation in the figure could be (1,1,0), i.e. the local
coordinate system in the Transform node would have as its origin the point (1,1,0) from the
coordinate system defined outside the Transform node.
You have probably already guessed that the two figures with a set of cubes had several
translations involved.
Composing Multiple Geometric Transformations
Up till now you've been playing with single transformations, but you can combine two or all
geometric transformations in a single transform node.
The transformations are independent of each other. For instance if you have both a
translation and a rotation in the same Transform, the translation is not affected by the
rotation and vice-versa.
In the above example, if you wanted the translation to occur in the rotated axis system then
you should define two nested Transform nodes, placing the rotation in the outer
Transform, and the translation in the inner Transform node, as sketched below.
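A minimal sketch of the nested approach: the outer Transform rotates the axes, so the inner translation happens in the rotated coordinate system.
Example:
Transform {
rotation 0 0 1 0.785
children [
Transform {
translation 2 0 0
children Shape { geometry Box { } }
}
]
}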

Collision Node
If you're not familiar with grouping nodes in VRML see the section Creating Hierarchical Node
Structures for general information on grouping.
By default, all objects in the scene are collidable, i.e. you shouldn't be allowed to walk through
walls and the like. That is the theory; in practice some browsers still allow you to play
ghost.
So why do you need a collision node if collision is by default detected? Here are a few
reasons to use this node:
 turn off collision: this may seem a dumb thing to do but it will provide better performance.
 provide alternative representations for collision: collision detection is hard work for the
browser, as complex worlds have lots of faces. This node allows you to provide an alternative
graphical representation for collision. If this alternative representation is simpler than the real
thing, then collision detection is easier to do. The alternative representation is not drawn, it is
only used for collision detection purposes.
 do something when collision occurs: the Collision node outputs an event collideTime, which
outputs the time of collision. For example, this event can be routed to an AudioClip node to
play a sound when the user collides with an object. The example at the bottom of this page
shows the code to implement this.
The following fields are present:
 children which contains all the nodes included in the group.
 collide, a boolean field which specifies if the children nodes are eligible for collision
detection.
 proxy specifies an alternative geometric representation for collision detection. This field
takes as value any node, except those which can only appear inside another node, such as
AudioClip, MovieTexture, geometry nodes, etc.
 bboxCenter specifies the center of a box that encloses the nodes in the group. The value
for this field is a 3D point.
 bboxSize specifies the size of a box that encloses the nodes in the group. By
default this field has a value of -1 -1 -1, which implies that no box is defined. The values for
this field must be greater than or equal to zero. If the children nodes do not fit inside the box
defined the results are undefined.
The latter two fields are optional. They can be used by the browser for optimization purposes.
Syntax:
Collision {
children [ ]
collide TRUE
proxy NULL
bboxCenter 0 0 0
bboxSize -1 -1 -1
}
In the following example, when the user collides with the sphere a sound will be heard.
Example:
#VRML V2.0 utf8
DEF col Collision {
children [
Sound { source DEF ac AudioClip { loop FALSE pitch 1.0 url "ouch.wav" } }
Shape {
appearance Appearance { material Material {}}
geometry Sphere {}
}
]
}
ROUTE col.collideTime TO ac.set_startTime

Anchor Node
If you're not familiar with grouping nodes in VRML see the section Creating Hierarchical Node
Structures for general information on grouping.
The Anchor node lets you define a set of objects as a link to a url. When you click on one of
the objects within an Anchor node the url will be fetched.
Anchor nodes can also be used to set a given Viewpoint. Examples are provided below.
When the user has the mouse over an object contained in an Anchor node the url will be
displayed.
The following fields are present:
 children which contains all the nodes included in the group.
 url specifies the url to be fetched or a Viewpoint to become active. You can specify
multiple locations if you want to, the browser will look for data in those locations in decreasing
order of preference.
 parameter supplies additional information for the browser. For instance you can specify
the target window where the url should be displayed.
 description specifies a string which will replace the url information given to the user when
the mouse is over an object contained within an Anchor.
 bboxCenter specifies the center of a box that encloses the nodes in the group. The value
for this field is a 3D point.
 bboxSize specifies the size of a box that encloses the nodes in the group. By
default this field has a value of -1 -1 -1, which implies that no box is defined. The values for
this field must be greater than or equal to zero. If the children nodes do not fit inside the box
defined the results are undefined. The latter two fields are optional. They can be used by the
browser for optimization purposes.

Syntax:
Anchor {
children [ ]
url [ ]
parameter [ ]
description ""
bboxCenter 0 0 0
bboxSize -1 -1 -1
}

Example:
Anchor {
children [ Shape { geometry Sphere { } }]
url "http://www.my_server.pt/my_world.wrl"
description "My World"
parameter ["target=my_frame" ]
}

In the above example when the mouse is over an object contained in the Anchor node a
prompt will be displayed with the message "My World" (If description was absent the prompt
would display the url specified). When the user clicks an object contained in the Anchor node
the specified url is fetched and displayed in the frame named "my_frame".
The next example shows an Anchor which is linked to a Viewpoint. When the mouse is over
an object contained in the Anchor node a prompt will be displayed with the message "My
Point of View". When the user clicks an object contained in the Anchor node the specified
Viewpoint becomes active.
Example:
Anchor {
children [ Shape { geometry Sphere { } } ]
url "#my_viewpoint"
description "My Point of View"
}

Billboard Node
Billboard is a special grouping node. All children nodes will turn to face you, as a sunflower
turns to face the sun as it moves. The children nodes will rotate about a user-defined axis.
The following fields are present:
 children which contains all the nodes included in the group.
 axisOfRotation specifies a 3D vector which will be used for the rotation. If a null vector is
specified, (0,0,0), then the object rotates to always face the user.
 bboxCenter specifies the center of a box that encloses the nodes in the group. The value
for this field is a 3D point.
 bboxSize specifies the size of a box that encloses the nodes in the group. By
default this field has a value of -1 -1 -1, which implies that no box is defined. The values for
this field must be greater than or equal to zero. If the children nodes do not fit inside the box
defined the results are undefined. The latter two fields are optional. They can be used by the
browser for optimization purposes.
Syntax:
Billboard {
children [ ]
axisOfRotation 0 1 0
bboxCenter 0 0 0
bboxSize -1 -1 -1
}

If an axisOfRotation different from the null vector is specified then there is one case where
results are undefined: this occurs when the user is aligned with the axis of rotation.

Switch Node
If you're not familiar with grouping nodes in VRML see the section Creating Hierarchical Node
Structures for general information on grouping.
The Switch node is a grouping node with a difference. At most one of the children nodes is
drawn. The whichChoice field specifies the index of the child to be drawn. If whichChoice
is -1 then none of the children are drawn.
Why do you want a grouping node which draws at most one of its children? In fact it is a very
useful node as you shall see.
One possible use for this node is to have different versions of a given shape inside the Switch
node. Setting whichChoice provides a quick way to change between these objects. Because
this field is an exposed field it can receive events which can change the child to be drawn.
Another good use of this node is to have node definitions without drawing them, with
whichChoice set to -1. Using this approach the node definitions would be together in the file,
simplifying your life when you try to find a node's definition.
Note that whichChoice determines only which child is drawn; however, non-graphic nodes
still take effect when present in a Switch node. For instance a TimeSensor node will generate
events regardless of the value of whichChoice. In fact all nodes will generate and are able to
receive events regardless of the whichChoice value.
Syntax:
Switch {
whichChoice -1
choice [ ]
}
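A minimal sketch: with whichChoice 0 only the box is drawn; sending an event that sets whichChoice to 1 would display the sphere instead.
Example:
Switch {
whichChoice 0
choice [
Shape { geometry Box { } }
Shape { geometry Sphere { } }
]
}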

Inline Node
In VRML you can split your world over a set of files. This simplifies world design, and lets you
reuse world parts in many worlds. For instance you can have a set of shapes that draw a door
in a file door.wrl and use the file in a house, contained in a file house.wrl.
The Inline node lets you specify a url where the data can be retrieved from. The url must
contain a valid and complete VRML file, header included.
The following fields are present:
 url which lets you specify the location of the file to be included. You can specify multiple
locations if you want to, the browser will look for data in those locations in decreasing order of
preference.
 bboxCenter specifies the center of a box that encloses the nodes in the inlined file.
The value for this field is a 3D point.
 bboxSize specifies the size of a box that encloses the nodes in the inlined file. By
default this field has a value of -1 -1 -1, which implies that no box is defined. The values for
this field must be greater than or equal to zero. The latter two fields are optional. They can be
used by the browser for optimization purposes.
Note that any node definitions using DEF which occur inside the inlined file are not visible
outside the inlined file.
Syntax:
Inline {
url []
bboxCenter 0 0 0
bboxSize -1 -1 -1
}
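A minimal sketch of the door example mentioned above (door.wrl is assumed to be a complete VRML file): the same file is inlined twice at different positions.
Example:
#VRML V2.0 utf8
Inline { url "door.wrl" }
Transform {
translation 5 0 0
children Inline { url "door.wrl" }
}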

Defining and Instancing Nodes


VRML allows you to define a set of nodes, or a node with particular field values, as a new
node type. Suppose you want to draw a set of shapes, all having the same Appearance node.
There are two ways of doing this: the hard way, which requires the Appearance node to be
repeated for each shape; or the easy way in which you define the common Appearance node
as being a new node type.
The latter approach is not only easier, since you don't have to rewrite the defined node over
and over again, but it also guarantees that the shapes share the same Appearance node.
This has clear advantages when, for instance, you decide that the defined node should be
altered. By defining the node once and using it multiple times, you only have to change the
node definition instead of changing all occurrences of the defined node.
Two keywords are provided: DEF and USE.
Syntax:
{
DEF name node
}

where node is any of the VRML nodes, and name is the new node identifier.
Syntax:
{
USE name
}

where name is an identifier which has been previously defined using DEF.
Note: When defining nodes in an inlined file the defined identifier can only be instantiated
inside the inlined file, i.e. the identifier is not recognized outside the file where it is defined.
Example: suppose that you want two red Shapes in your world. The VRML code to define this
world could be something like:
Example:
Shape { appearance
DEF common_appearance Appearance {
material Material {diffuseColor 1 0 0}
}
geometry Sphere { }
}
Transform {
translation -2 0 0
children [
Shape {
appearance USE common_appearance
geometry Cone { }
}
]
}

The result of the above code is a red sphere at the origin and a red cone placed two units to
its left.
LOD Node
LOD stands for Level of Detail. This node lets you specify alternative representations of a
graphical object, and the distance ranges to use each representation.

The following fields are defined:
 level specifies the set of alternative representations of a graphical object.
 center specifies a 3D point to which the distance is computed.
 range specifies a set of distances, floating point values greater than or equal to zero.
There must be N distances specified if N+1 levels are specified. The distances must be ordered
starting from the smallest distance, otherwise the results are undefined. If this field is left empty
then this is a hint to the browser telling it that it should select a level where a constant display
rate can be accomplished.

Syntax:
LOD {
level []
center 0 0 0
range []
}

This node can have a major impact on the performance of a browser, therefore its role can't
be overstated. Look at the following figure.

In it you see three cones. The left one, which isn't really a cone but more like a pyramid, has
only four sides; the middle one has 8 sides; the right one has 16 sides. These cones were
created using Extrusion nodes.

You can see that they are different by the way they reflect light. Now look at the next figure.

The above figure includes the same three cones but placed further away from the user. It is
becoming difficult to distinguish between the right and middle cones, although you can still
clearly see the difference between the left cone and the other two.

In the above figure the user is further away from the cones than in the previous figure and as
a result it is harder to distinguish which is the more detailed and which is the less detailed cone.
This sequence of images shows that with distance the perception of the details in an object
becomes harder. This is the reasoning behind the LOD node. Why draw very complex shapes
when they are too far away from the user for details to be recognized? The more complex a
shape is the more demanding is the task of drawing it. If an object is far away there is no
benefit in drawing the object full of detail.

The LOD node can be used in the following way:

 if the user is close to the object draw the most detailed version
 when the user is not close anymore, but still not too far away, draw a less detailed version
 when the user is very far away draw only a crude version of the object.
By selecting less detailed versions when the user is not close to an object, time is saved and
the user perceives no difference due to distance. You can specify as many levels of detail as
desired. You should try to keep changes from one level of detail to the next as small as
possible so that a noticeable break will not occur.

The LOD can also be used to avoid drawing objects which are invisible, for instance in
another room. In this case one can specify an empty object, for instance a Shape without a
geometry.

The range field specifies which version of the object is drawn. The objects should be specified
in the level field by decreasing level of detail. If the distance from the user to the object is
smaller than the first range specified, then the first version, the most detailed, is drawn; if
the distance is between the (J-1)th and Jth ranges, then the Jth version is drawn. If the
distance is greater than the last range specified, then the last, least detailed, version is drawn.

The main problem is knowing how much is "close", "not too far away", and "very far away".
This is perhaps something the browser developers should think about, providing coordinates
for the user's position.

If browsers did provide the user's position then one could have the several versions with
different levels of detail of the object drawn at the origin, then the user would move away from
the objects until no difference was perceived between the full detailed version and a slightly
less detailed version. The user's position could then be used to compute the distance when
the level of detail should be changed. This procedure would be repeated for all levels of detail
until all versions looked the same.

Unfortunately most browsers do not provide the user's position so it becomes a matter of trial
and error.

Source code exemplifying the use of a LOD node:

Example:
#VRML V2.0 utf8

LOD {
range [20,40]
level [
#full detail 16 sided cone
Shape{
appearance Appearance {
material Material {
diffuseColor 1.0 1.0 1.0
}
}
geometry Extrusion{
crossSection [ -1 0, 0 0, -1 -2, -1 0 ]
spine [1 0 0 , 0.866 0 0.5,
0.5 0 0.866, 0 0 1 ,
-0.5 0 0.866, -0.866 0 0.5,
-1 0 0, -0.866 0 -0.5,
-0.5 0 -0.866, 0 0 -1 ,
0.5 0 -0.866, 0.866 0 -0.5,
1 0 0
]
}
}
#intermediate detail 8 sided cone
Shape{
appearance Appearance {
material Material {
diffuseColor 1.0 1.0 1.0
}
}
geometry Extrusion{
crossSection [ -1 0, 0 0, -1 -2, -1 0 ]
spine [1 0 0 , 0.707 0 0.707 ,
0 0 1 , -0.707 0 0.707,
-1 0 0, -0.707 0 -0.707,
0 0 -1,
0.707 0 -0.707, 1 0 0
]
}
}
#low detail 4 sided cone
Shape{
appearance Appearance {
material Material {
diffuseColor 1.0 1.0 1.0
}
}
geometry Extrusion{
crossSection [ -1 0, 0 0, -1 -2, -1 0 ]
spine [1 0 0 , 0 0 1, -1 0 0,
0 0 -1 , 1 0 0
]
}
}
]
}

Events
The main difference between VRML 1.0 and VRML 2.0 is that every node can send and
receive events. Yes, in VRML 2.0 you can have animated objects that react to users' actions.
Each event carries a value of a given data type.
In VRML 2.0, when some fields of a Node are changed after the world has been loaded an
event is generated. You may have noticed that some fields in the syntax boxes, present in the
tutorial for each node, were written in bold; well, those fields are exposed fields, i.e. they can
be set while the user is viewing the world.
An exposed field can be decomposed into the following three lines:
eventIn set_fieldName
field fieldName
eventOut fieldName_changed
where fieldName can be any exposed field, i.e. fields written in bold in the syntax boxes
present in the tutorial for each node.
An exposed field declaration implies that the node is able to receive events, eventIn, and
generate events, eventOut. eventIn in an exposed field is used to set the field's value. When
the field's value is changed, the node in which the field is defined will generate the respective
eventOut.
Besides exposed fields, nodes can have other eventIn or eventOut fields, however these
fields are not present in the node's definition in a VRML file, i.e. you don't write them when
defining the node.
Some nodes have an eventIn defined for some fields but no corresponding eventOut. These
fields are presented in the syntax boxes in italics. The eventIns defined are set_name, where
name is the field's name for those fields in italics.
When an event occurs, the node which generated the event outputs a value or set of values
of a given data type depending on the type of event. When one sends an event to a node,
one is sending a value or set of values to that node. The node determines what the event
should do with the value or values provided. In the VRML nodes, when a node receives an
event it alters one field, the field specified by the event.
How do I send an event to a node and how do I catch an event generated by a node? There
are two ways of doing this, using ROUTES and scripts.
The next question is: when are events generated, other than those which result in a field's
value being changed? There are several possibilities:
 Timers which generate events at regular intervals
 Touch Sensors which generate events when the cursor is over objects within the same
group as the sensor.
 Visibility Sensors which generate events when shapes within the same group as the
sensor are visible to the user.
 Dragging Sensors which generate events when the user clicks the mouse and drags
objects within the same group as the sensor.
 Proximity Sensors which generate events when the user is within a predefined box.

ROUTES
In the section events it was mentioned that a node can send and receive events.
ROUTES are a simple way of defining a path between an event generated by a node and a
node receiving an event. The syntax is:
ROUTE Node.eventOut_changed TO Node.set_eventIn
Note: because all exposed fields implicitly define an eventIn and an eventOut field, you don't
have to write the prefix 'set_' or the suffix '_changed'. The following syntax is valid in VRML as
long as the eventIn and the eventOut which appear in the ROUTE sentence belong to
exposed fields.
ROUTE Node.eventOut TO Node.eventIn
Because an event is generated every time an exposed field is changed, it is possible to have a
cascade of events being generated. A time stamp is given to each event which is generated,
the same time stamp being given to all events in a cascade, as if all events in a cascade
occurred simultaneously.
Now there is something missing, isn't there? In the section events it was mentioned that the
eventIn of an exposed field is used to set the value of the respective field. However, looking at
the syntax of ROUTE there is no explicit declaration of what the new value is. As mentioned
before in this section, "ROUTES are a SIMPLE way of defining a path...", and SIMPLE in this
case has some limitations. The new value for the field associated with the eventIn is the value
of the field which caused the eventOut to be generated.
Big deal, so what can I do with routes if I can't specify the value for the field that I'm about to
set? Well, you actually can do that using a middle man, see Interpolators.
A cascade of events may result in an event being generated more than once in the same
cascade, which can cause a loop of events. In VRML loops are not allowed; an event shall only
be generated once in each cascade of events. You don't have to worry about this, the
browser will disable any event which is repeated with the same time stamp.
It is possible that two different generated events in a cascade are linked with a ROUTE to the
same eventIn. Results are undefined in this case. You should try to avoid this type of situation.

TimeSensor Node
The TimeSensor node is a clock which generates events as time goes by. The events can be
used to perform animation for instance. This node has the following fields:
 enabled which specifies the status of the sensor
 startTime which specifies when the TimeSensor starts to generate events. The value of
this field is the number of seconds since midnight, January the first, 1970
 stopTime which specifies when the TimeSensor stops generating events. The value of
this field is the number of seconds since midnight, January the first, 1970
 cycleInterval specifies the number of seconds during which the TimeSensor will generate events
 loop specifies if the TimeSensor should be restarted when the cycleInterval is over.
Note: In VRML time is counted from midnight, January the first, 1970. Some say that the reason
for choosing this date as the beginning of time has to do with the birth of the Unix system.
When the TimeSensor is enabled it will start ticking when the startTime is achieved (a value
of 0 means that the TimeSensor will start generating events as soon as the world is loaded).
While enabled the timer will continuously, meaning as often as possible, output an event time
with the current time. The TimeSensor will stop generating events when either:
 the stopTime, or
 startTime + cycleInterval is reached. In the latter case, if loop is TRUE, the TimeSensor
will start generating events again.
If using the cycleInterval, the TimeSensor will output an event fraction_changed. The value of
this event is between 0.0 and 1.0 and represents the fraction of the cycleInterval elapsed. If
loop is TRUE then when the cycleInterval is over the fraction_changed starts over from 0.0,
see Interpolators for examples on using TimeSensors.
The number of times the events will be generated is dependent on the speed of your
machine, your browser, the number of applications you're running, etc.
TimeSensors output one more event: isActive. This event will have the value TRUE when the
clock starts ticking, and FALSE when the clock stops ticking.
Note that setting loop to FALSE will stop the TimeSensor after startTime + cycleInterval
seconds from midnight, January the first, 1970. This implies that you have to compute the
number of seconds from that date until the time you want to start your TimeSensor. Hard job,
isn't it?
So how do you set a TimeSensor which will output events for only a cycleInterval, i.e. without
using loop, when for instance the user clicks a shape?
The solution is to send an event set_startTime with the current time to the TimeSensor. The
problem with this approach is how to compute the current time. Fortunately all sensors in
VRML generate events which output a time value when they become active. So basically all
you have to do is to route the event generated by the sensor when it becomes active to the
eventIn of the TimeSensor set_startTime.
See the examples provided with Interpolators and Sensors (see the Index for a complete list).
All fields are optional, the default values being applied if the field is not specified.
Syntax:
TimeSensor {
cycleInterval 1.0
enabled TRUE
loop FALSE
startTime 0
stopTime 0
}
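As a hedged sketch of the technique described above (the node names are illustrative; interpolators are covered in the Interpolators section), clicking the sphere starts the clock, and the clock's fraction_changed events drive a PositionInterpolator which raises the sphere 2 meters over 3 seconds:
Example:
#VRML V2.0 utf8
Group {
children [
DEF ts TouchSensor { }
DEF tr Transform {
children Shape {
appearance Appearance { material Material { } }
geometry Sphere { }
}
}
]
}
DEF clock TimeSensor { cycleInterval 3 }
DEF pi PositionInterpolator {
key [ 0, 1 ]
keyValue [ 0 0 0, 0 2 0 ]
}
ROUTE ts.touchTime TO clock.set_startTime
ROUTE clock.fraction_changed TO pi.set_fraction
ROUTE pi.value_changed TO tr.set_translation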

TouchSensor Node
The TouchSensor node is a way of providing interactivity with the user. This sensor is usually
defined in a group and affects all shapes within that group. The sensor reacts when the user
has the mouse over and when the user clicks a shape contained in the group.
This node has a single field which specifies if the sensor is enabled or not.
The isOver event is generated with the value TRUE by this node when the sensor is enabled
and the mouse moves from a position where it is not over a shape contained within the group
to a position where it is over a shape. A value FALSE is provided by this event when the
sensor is enabled and the mouse stops being over a shape within the group.
If the shapes are not visible, i.e. they are hidden by other surfaces, then the touch sensor
should not generate the isOver event.
When the mouse is over a shape within the same group as a TouchSensor, then the following
events are generated when the mouse moves:
 hitPoint_changed: Provides the 3D position on the surface of the shape in the
TouchSensor's group coordinate system.
 hitNormal_changed and hitTexCoord_changed: provide respectively the surface normal
vector and the texture coordinates of the surface at the hitPoint.
When the user presses the mouse button over a shape, and the sensor is enabled, the
TouchSensor will also generate the event isActive with the value TRUE.
When the user releases the mouse button an isActive event is generated with the value
FALSE, plus the event touchTime with the current time.
Syntax:
TouchSensor {
enabled TRUE
}

When defining several TouchSensors inside nested groups, only the lowest sensor will
generate events. For instance consider the following excerpt of code:
Example:
DEF ga Group {
children [
DEF sa Shape ...
DEF ta TouchSensor {}
DEF gb Group {
children [
DEF sb Shape ...
DEF tb TouchSensor {}
]
}
]
}

When the user clicks over Shape sa, TouchSensor ta will generate events; when the
user clicks on Shape sb only TouchSensor tb will generate events.
Example of a TouchSensor to play a sound:
The following source code describes a group with a Shape, a TouchSensor, and a Sound.
Example:

#VRML V2.0 utf8


Group {
children [
DEF ts TouchSensor { }
Sound {
source DEF ac AudioClip {
loop FALSE
url "sfx.mid"
}
}
Shape {
appearance Appearance {
material Material {}
}
geometry Sphere{}
}
]
}

To play the sound when the user clicks the shape the following route should be used:
ROUTE ts.touchTime TO ac.set_startTime
The sound will play once because the loop field of the AudioClip is set to FALSE.
Another possibility is to have the sound playing whenever the mouse is over the
shape. This can be achieved by the following route:
ROUTE ts.isOver TO ac.set_loop
This will cause loop to become TRUE when the TouchSensor outputs an isOver event
with the value TRUE. When the user is no longer over the shape the isOver event
from the TouchSensor will output FALSE, therefore stopping the sound as soon as the
sound's duration/pitch or stopTime is reached.

VisibilitySensor Node
The VisibilitySensor node is used to detect visibility changes in a virtual box, generating
events when the visibility status changes. This sensor does not relate to the shapes defined
within the same group, i.e. it does not detect if a shape within a group is visible or not. The
Visibility sensor does not detect if the box is hidden from view due to other shapes in the
scene. The sensor behaves as if there were no other shapes being drawn.
The following fields are present:
 center: The center of the rectangular box
 size: The dimensions of the rectangular box
 enabled: determines the status of the sensor.
The following events are generated by this sensor:
 enterTime: outputs the time when the box becomes visible
 exitTime: outputs the time when the box stops being visible
 isActive: outputs TRUE when the box becomes visible, and FALSE when the box
becomes invisible
Syntax:
VisibilitySensor {
enabled TRUE
center 0 0 0
size 0 0 0
}
When defining several VisibilitySensors inside nested groups, all the sensors will generate
events when the respective boxes are visible.
Example of a VisibilitySensor to play a sound. When the Shape becomes visible you should
hear a sound.
The following source code describes a group with a Visibility Sensor, a Shape and a Sound.
Example:
#VRML V2.0 utf8
Group {
children [
DEF vs VisibilitySensor { size 1 1 1 }
Sound { source DEF ac AudioClip { loop FALSE url "sfx.mid" } }
Shape { geometry Box { size 1 1 1 } }
]
}
ROUTE vs.enterTime TO ac.set_startTime

Dragging Sensors
Dragging sensors are a special kind of sensor that not only track the user's motion but also
move the objects within the same group as the sensor. There are three types of dragging
sensors:
 PlaneSensor: lets the user move objects in the XY plane.
 CylinderSensor: maps the movement to the surface of a conceptual cylinder.
 SphereSensor: maps the movement to the surface of a conceptual sphere.
The above sensors all share the following fields:
 enabled defines the status of the sensor
 offset indicates the initial position of the shapes within the group. A zero offset means
that the shapes will be moved from their original position, whereas an offset different from
zero indicates that dragging starts at the original position plus the specified offset. The
offset value is ignored if autoOffset is TRUE. Note that the type of the offset field varies with
the type of the sensor.
 autoOffset specifies if the browser should track the current position or do all dragging
operations relative to the original position. This is only relevant for the second and subsequent
draggings. If autoOffset is TRUE then the second dragging will start where the first one
ended; if FALSE then the shapes will return to their original position each time a new dragging
operation begins.
The following events are common to all the sensors:
 isActive indicates whether a dragging operation is being done. The isActive event will output
TRUE if the user has the mouse pressed over a shape within the same group as the sensor,
and FALSE otherwise.
 trackPoint_changed provides the actual coordinates on the surface defined by the sensor
 rotation_changed (SphereSensor and CylinderSensor) and translation_changed
(PlaneSensor) provide the relative orientation or translation being made.
In order to actually move the shapes you should place the shapes inside a Transform node.
The Transform node should be in the same group as the sensor. You then need to route these
events to fields in the Transform node. See the examples provided for each sensor.
If using multiple sensors in the same group it is up to you to specify which does what, they will
all generate events when any of the shapes within the group is affected.
If using multiple drag sensors in nested groups then the inner group sensors grab the user's
action and the outer group sensors will ignore it.

PlaneSensor Node
The PlaneSensor node maps the mouse movement into the XY plane, moving the shape in
the XY plane of its local coordinate system. See Dragging Sensors for more information on
this type of sensor. This node allows you to limit the dragging operation to a rectangular area.
In addition to the fields which are common to all dragging sensors, this node has the following
fields:
maxPosition which specifies the maximum X and Y.
minPosition which specifies the minimum X and Y.
Note: if maxPosition is lower than minPosition for an axis, then the movement is not limited for
that axis. By default the movement is not limited in either X or Y.
Syntax:
PlaneSensor {
enabled TRUE
offset 0 0 0
autoOffset TRUE
maxPosition -1 -1
minPosition 0 0
}

In addition to the exposed fields presented in the syntax the PlaneSensor node generates
the following events (see Dragging Sensors for a description of their meaning):
 isActive (boolean)
 translation_changed (3D vector)
 trackPoint_changed (3D point)
Example: Using a PlaneSensor to move a Sphere in a
rectangular area defined by (-1,-1), (1,1).
First one needs to create a group node which will include both the sensor and a Transform
node containing a Sphere geometry.
Example:
#VRML V2.0 utf8
Group {
children [
DEF ts PlaneSensor {
minPosition -1 -1
maxPosition 1 1
}
DEF tr Transform {
children Shape { geometry Sphere {} }
}
]
}

Now we need to create a route between the eventOut translation_changed from the
PlaneSensor to the exposed field translation of the Transform node. The route to achieve this
is:
ROUTE ts.translation_changed TO tr.set_translation
Note: On the VRML example provided the axes are not inside the same group as the sensor.

SphereSensor Node
The SphereSensor node maps the mouse movement onto the surface of a conceptual sphere,
rotating the shape about the center of its local coordinate system. See Dragging Sensors for
more information on this type of sensor.
Syntax:
SphereSensor {
enabled TRUE
offset 0 1 0 0
autoOffset TRUE
}
In addition to the exposed fields presented in the syntax the SphereSensor node generates
the following events (see Dragging Sensors for a description of their meaning):
 isActive (boolean)
 rotation_changed (3D vector plus angle)
 trackPoint_changed (3D point)
Example: Using a SphereSensor to rotate a Box.
First one needs to create a group node which will include both the sensor and a Transform
node containing a Box geometry.
Example:
#VRML V2.0 utf8
Group {
children [
DEF ss SphereSensor {}
DEF tr Transform {
children Shape { geometry Box {} }
}
]
}

Now we need to create a route between the eventOut rotation_changed from the
SphereSensor to the exposed field rotation of the Transform node. The route to achieve this
is:
ROUTE ss.rotation_changed TO tr.set_rotation
Note: On the VRML example provided the axes are not inside the same group as the sensor.

CylinderSensor Node
The CylinderSensor node maps the mouse movement onto the surface of a conceptual
cylinder, rotating the shapes about the Y axis of its local coordinate system. See Dragging
Sensors for more information on this type of sensor. This node allows you to limit the dragging
operation between two angles. The offset field is relative to the X axis.
In addition to the fields which are common to all dragging sensors, this node has the following
fields:
maxAngle which specifies the maximum rotation.
minAngle which specifies the minimum rotation.
diskAngle which, according to the VRML 2.0 specification, determines how the pointer motion
is interpreted: when the angle between the line of sight and the cylinder's Y axis is smaller
than diskAngle the sensor behaves like a rotating disk, otherwise like a cylinder.
Note: if maxAngle is smaller than minAngle, then the rotation is not limited. By default the
rotation is not limited.
Syntax:
CylinderSensor {
enabled TRUE
offset 0
autoOffset TRUE
maxAngle -1
minAngle 0
diskAngle 0.262
}

In addition to the exposed fields presented in the syntax the CylinderSensor node generates
the following events (see Dragging Sensors for a description of their meaning):
 isActive (boolean)
 rotation_changed (3D vector plus an angle, the 3D vector is the Y axis of the local
coordinate system)
 trackPoint_changed (3D point)
The following example uses a CylinderSensor to rotate a Cone, limited between 0 and 1.57
radians (90 degrees).
First one needs to create a group node which will include both the sensor and a Transform
node containing a Cone geometry.
Example:
#VRML V2.0 utf8
Group {
children [ DEF cs CylinderSensor {
minAngle 0
maxAngle 1.57 }
DEF tr Transform {
children Shape { geometry Cone {} } }
]
}

Now we need to create a route between the eventOut rotation_changed from the
CylinderSensor to the exposed field rotation of the Transform node. The route to achieve this
is:
ROUTE cs.rotation_changed TO tr.set_rotation
Note: On the VRML example provided the axes are not inside the same group as the
sensor.

ProximitySensor Node
The ProximitySensor node is a way of providing interactivity with the user. The sensor
generates events when the user enters, leaves or moves in a defined rectangular box. This
sensor does not relate to the shapes defined within the same group, i.e. it does not detect if a
shape within a group is close to the user or not.
This node has the following fields:
 enabled specifies the status of the sensor;
 center determines the center of the rectangular box;
 size specifies the size of the box.
The isActive event is generated with the value TRUE by this node when the sensor is enabled
and the user moves from a position outside the box to a position inside the box. A value
FALSE is provided by this event when the sensor is enabled and the user leaves the box.
An event enterTime is generated when the user enters the box, the event exitTime is
generated when the user leaves the box.
When the user is inside, enters, or leaves the box, the following events are generated when
the user moves:
 position_changed: Provides the 3D position of the user in the sensor's coordinate system.
 orientation_changed: provides the user's orientation.
Syntax:
ProximitySensor {
enabled TRUE
center 0 0 0
size 0 0 0
}

When defining several ProximitySensors inside nested groups, all the sensors will generate
events when the user is inside the respective boxes.
Example of a ProximitySensor to turn on and off a light. The light is turned on as soon as the
user enters the Box, and off when the user leaves the box:
The following source code describes a group with a ProximitySensor, a SpotLight, and a Shape.
Example:
#VRML V2.0 utf8
Group {
children [
DEF ps ProximitySensor { size 4 4 4}
DEF sl SpotLight { on FALSE location 0 0 4}
Shape { appearance Appearance {material Material {}}
geometry Sphere {}
}
]
}
ROUTE ps.isActive TO sl.set_on

ProximitySensor Example
Many people have asked how to have a shape, or group of shapes, keep its relative
position to the user while the user is moving. This example shows you how to do that.
ProximitySensor nodes can be used to keep track of the user's position and orientation. Two
eventOuts are provided for this effect:
 position_changed
 orientation_changed
Therefore a ProximitySensor node generates events whenever the user changes position or
orientation. These events can then be routed to a Transform node where the shapes are
placed.
The only problem with this method is that a ProximitySensor requires the definition of the size
of a virtual box. If the user is outside the virtual box then the ProximitySensor will NOT
generate events. To avoid this problem one can always define the size of the ProximitySensor
to be larger than the world itself.
The following code should do the trick:

Example:
#VRML V2.0 utf8
Group {
children [
DEF ps ProximitySensor {
center 0 0 0
size 1000 1000 1000
}
DEF tr Transform {
children
Transform {
translation 0 0 -5
children
Shape {geometry Sphere{}}
}
}
]
}
ROUTE ps.position_changed TO tr.set_translation
ROUTE ps.orientation_changed TO tr.set_rotation

Note that the sphere which is 'locked' to the user position is inside a Transform node which
contains a translation. This translation defines the relative position to the user, in this case the
center of the sphere will be 5 units away from the user.
In the example world there is also a box. The box should remain in its position while you
move; only the sphere will follow your movement.

Sound Node
VRML 2.0 supports not only 3D graphics but also 3D sound. Using the Sound node you can
not only provide a location for the sound source but also the spatial properties of its
propagation.
The following fields are present:
 source specifies either an AudioClip node, for wave and midi sounds, or a MovieTexture
node, for mpeg sound.
 location specifies the location of the sound source in the local coordinate system in which
the Sound node is included.
 intensity defines the volume of the sound. It must be between 0 and 1, where 0 corresponds
to silence and 1 to full volume.
 direction indicates the direction in which the sound source is pointing.
 priority provides a way to define the relevance of the sound. This field is useful if there are
no sound channels available; it must be between 0 and 1. Higher values indicate higher
priorities.
 spatialize indicates if the sound should be treated as 3D or ambient sound. If TRUE the
sound is 3D, else the sound is not localized, i.e. you can hear the sound in both the left and
right channels of your stereo regardless of your position.
 minBack, minFront: specify an inner ellipsoid within which the sound is heard at the intensity
specified.
 maxBack, maxFront: specify an outer ellipsoid. If the user is outside this ellipsoid the
sound is not heard. When between the ellipsoids the sound is attenuated as one travels from
the inner ellipsoid to the outer ellipsoid.
Syntax:
Sound {
source NULL
location 0 0 0
intensity 1
priority 0
spatialize TRUE
minBack 1
minFront 1
maxBack 10
maxFront 10
}
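A minimal sketch (the file name ambient.wav is illustrative): a looping sound source at the origin, heard at full volume within 5 meters and inaudible beyond 20 meters:
Example:
Sound {
source AudioClip {
url "ambient.wav"
loop TRUE
startTime 1
}
location 0 0 0
minFront 5
minBack 5
maxFront 20
maxBack 20
}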

AudioClip Node
This node specifies the location and properties of an audio source for the Sound node. The
file specified in the url must be in either a MIDI or WAVE format.
The following fields are present in this node:
 loop specifies if the sound is to play repeatedly, see the notes after the field definitions.
 pitch specifies the speed at which the sound will play, for instance if pitch is 2 then the
sound will play twice as fast. Only positive values are allowed.
 startTime specifies the starting time of the sound in seconds. The value of this field is the
number of seconds since midnight, January the first, 1970.
 stopTime specifies the stopping time of the sound in seconds. The value of this field is the
number of seconds since midnight, January the first, 1970.
 url which specifies the location of the sound. You can specify multiple locations if you want
to, the browser will look for data in those locations in decreasing order of preference.
 description is a string which can be used to describe the sound. The browser is not
required to display the string.
Notes:
 In VRML the world was created at midnight, January the first, 1970. Some say that the
reason for choosing this date as the beginning of time has to do with the birth of the Unix
system.
 If loop is set to TRUE and startTime >= stopTime then the sound will play forever.
However if startTime < stopTime the sound will stop as soon as stopTime is reached.
 If startTime >= stopTime then the sound should start as soon as startTime is reached.
Note that some browsers only start the sound when startTime > stopTime. This is because in
the early drafts of the VRML 2.0 specification this latter condition was required to start the
sound.
All fields are optional, the default values being applied if the field is not specified.
Note: if you do not specify the location of the sound, url, then no sound will be played.
Syntax:
AudioClip {
loop FALSE
pitch 1.0
startTime 0
stopTime 0
url [ ]
description ""
}

The AudioClip node has two eventOut fields: duration_changed and isActive. A
duration_changed event is generated when the current url is changed; the value for this event
is the duration in seconds of the sound. Note that changing the pitch does not produce a
duration_changed event. The event isActive with a value TRUE is generated when the sound
starts playing; when the sound stops the event isActive will output FALSE.
To play the sound in the example provided press the mouse button over the sphere. Note
that, in the example provided, once the sound has started you can't restart it by pressing the
sphere.

Bindable Nodes
Bindable nodes are a special type of node in the sense that only one of each can be active at
a certain time. Bindable nodes provide information about the environment and the user. The
following nodes are bindable nodes:
 Viewpoint: specifies the position of the user
 NavigationInfo: specifies features of the user
 Fog: adds atmosphere to the scene
 Background: provides a sky, and background images that can enhance dramatically the
scene.
When the scene is loaded the first of each of these nodes to be found becomes active, i.e. is
bound (nodes within inlined files do not count). The bound nodes are put on the top of a
stack. There is a stack for each type of node.
When a node is bound it generates the boolean event isBound with the value TRUE.
In order to bind a node, for instance to change the background, an event set_bind with the
value TRUE should be routed to the node. The newly bound node goes to the top of the
respective stack. The previously bound node sends an event isBound with value FALSE, and
the newly bound node sends an isBound event with value TRUE.
In order to unbind a node an event set_bind with the value FALSE should be sent to the
respective node. The unbound node sends an event isBound with value FALSE and is
popped from the top of the stack. The node which is now at the top of the stack becomes the
active node, i.e. it becomes bound, and it sends an isBound event with value TRUE.
If a node which is not bound but is in the stack receives an event set_bind with the value
FALSE, the node is removed from the stack. If the node is not in the stack the event is
ignored.
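As a hedged sketch of sending set_bind events (the node names are illustrative): the second Background below is not bound when the world loads; pressing the mouse button over the sphere routes an isActive TRUE event that binds it, and releasing the button unbinds it again:
Example:
#VRML V2.0 utf8
Background { skyColor 0 0 0 }
DEF bg2 Background { skyColor 0 0 1 }
Group {
children [
DEF ts TouchSensor { }
Shape {
appearance Appearance { material Material { } }
geometry Sphere { }
}
]
}
ROUTE ts.isActive TO bg2.set_bind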
NavigationInfo Node
The NavigationInfo node describes the user and specifies the navigation model. This is a
bindable node.
The following fields are present in this node:
- avatarSize specifies the physical dimensions of the user. These dimensions are used for
collision detection as well as for terrain following. This field takes a list of three values.
The first value specifies the minimum distance allowed between the user and any collidable
geometry, see the Collision node. The second value determines the height of the user, i.e. when
using the terrain following option provided by some browsers the user is kept at this height
above the ground. The third value determines the maximum height that the user can step over;
think of a staircase, for instance: the value should be higher than the height of a single step.
- headlight is a boolean field which determines whether the headlight is turned on or off, see
Lighting.
- visibilityLimit determines the maximum distance at which the user can see; the default
value of 0.0 indicates that this distance is infinite. Note that regardless of the distance set
you will still see the background.
- speed indicates the rate at which the user moves, ideally in meters per second.
- type defines the type of navigation for the user. Possible values are "WALK", "EXAMINE",
"FLY", and "NONE". Some browsers may provide more navigation modes, but these four are
the 'official' ones.
In addition to the events defined by the exposed fields, this node supports the events
common to all bindable nodes.
Syntax:
NavigationInfo {
avatarSize [0.25, 1.6, 0.75]
headlight TRUE
speed 1.0
type "WALK"
visibilityLimit 0.0
}

In the example provided the user is placed at a height of 1.6 meters above the ground. The first
step has a height of 0.25 meters and the last two steps have a height of half a meter each.
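As an illustration (the values below are assumptions, not those of the example), a
NavigationInfo for a 1.6 meter tall user who can climb steps up to half a meter high could
be written as:
Example:
NavigationInfo {
  avatarSize [ 0.25, 1.6, 0.5 ]   # collision distance, eye height, maximum step height
  headlight TRUE
  type "WALK"
}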

Viewpoint Node
The Viewpoint node specifies the user's location and viewing model parameters. This is a
bindable node.
The following fields are present:
- fieldOfView specifies an angle in radians. Small angles correspond to telephoto lenses,
whereas large angles (up to 3.14) are the equivalent of wide-angle lenses. Note that
perspective gets distorted at large values.
- position specifies the user's position in the coordinate system in which the Viewpoint is
defined.
- orientation determines the direction in which the user is looking; it specifies a rotation
relative to the default orientation, which points along the Z axis in the negative direction.
- description provides a textual description of the Viewpoint. Most browsers present a list of
the Viewpoints found in the file in their menus; the contents of that list are taken from the
description fields of the Viewpoints. You can also direct the browser to go to a Viewpoint by
appending the Viewpoint's name to the URL, as in "my_world.wrl#my_viewpoint", where
my_viewpoint is the DEF name of a Viewpoint in the file my_world.wrl.
- jump determines the transition used when you change from the active Viewpoint to a new
one. If jump is TRUE the user is moved to the new Viewpoint (the browser may animate the
path from the current Viewpoint to the new one); if jump is FALSE the Viewpoint is changed
without affecting the user's position.
In addition to the events common to all bindable nodes, this node generates the event
bindTime, carrying the current time, when it receives the event set_bind.

Syntax:
Viewpoint {
fieldOfView 0.785398
position 0 0 10
orientation 0 0 1 0
description ""
jump TRUE
}

In the example on the right there are two Viewpoints defined, v1 and v2. The fields in the form
relate to Viewpoint v1. You can move between the Viewpoints to see the effect of the jump field.
In v2, jump is TRUE. v2 is placed at 0 -10 0, looking at the origin.
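A sketch along those lines (the orientation value is an assumption, chosen so that v2 faces
the origin):
Example:
DEF v1 Viewpoint {
  position 0 0 10
  description "v1"
}
DEF v2 Viewpoint {
  position 0 -10 0
  orientation 1 0 0 1.5708   # rotate the default -Z view direction to +Y, towards the origin
  description "v2"
  jump TRUE
}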

Background Node
The Background node provides a way to describe the horizon of your world. This is a bindable
node.
The Background node allows you to define sky and ground colors, as well as panorama images,
to add a horizon to your world. All items on the background are placed as if infinitely far away
from you, i.e. you can never get closer to the background images.
Syntax:
Background {
skyColor [ 0 0 0 ]
skyAngle [ ]
groundColor [ ]
groundAngle [ ]
backUrl [ ]
bottomUrl [ ]
leftUrl [ ]
rightUrl [ ]
frontUrl [ ]
topUrl [ ]
}

The sky is defined as an infinitely large sphere placed around your world. You can define a
constant color for it, or have gradient effects.
The sky color is defined by two fields: skyColor and skyAngle. If you want a single-color sky,
like the figure on the left above, you specify skyColor as the RGB of the desired color and
do not specify skyAngle. The skyAngle is only used when a gradient effect, like the image
on the right above, is intended. If you want a gradient effect you specify the color for the
upper pole of the sphere as the first color in the skyColor field. Next you specify, in the
skyAngle field, the angle at which you want a new color (the angle is measured from the upper
pole); the second color in the skyColor field specifies this new color. The browser creates a
gradient between the first and second colors, starting at the upper pole and ending at the
angle specified in the skyAngle field. You can specify any number of colors and angles; the
number of angles must be the number of colors minus 1, since the first color always corresponds
to the upper pole. For instance, the following combinations were used to create the above
images:
Left Image
skyAngle          skyColor
0 (upper pole)    0 0 1

Right Image
skyAngle          skyColor
0 (upper pole)    0 0 1
1.2               0 0 0.6
1.57              1 0 0
On the left image only one color was specified. No angle is given because the first color
always applies at the upper pole; the sky is all blue.
On the right image, from 1.57 radians (roughly 90 degrees) down to the lower pole the color
used is the last color specified, i.e. red.
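In VRML these tables translate directly into the skyColor and skyAngle fields. For the right
image:
Example:
Background {
  skyColor [ 0 0 1,     # upper pole: blue
             0 0 0.6,   # at 1.2 radians: darker blue
             1 0 0 ]    # at 1.57 radians and below: red
  skyAngle [ 1.2, 1.57 ]
}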
Similarly to the sky, the ground is also an infinitely large sphere. The ground sphere is placed
inside the sky sphere. The only difference between these two spheres is that, for the ground
sphere, wherever you do not specify a color you can see through it, i.e. you can see the sky
sphere. Usually, colors are only provided for the bottom hemisphere of the ground sphere.
The following image shows a ground combined with a sky.

The ground colors and angles used were:

groundAngle       groundColor
(lower pole)      0.5 0.5 0
1.57              0.5 0.5 0

Note that, unlike skyAngle, groundAngle is measured from the lower pole; the first color
always applies at the lower pole, and the single angle 1.57 (the horizon) pairs with the
second color.
From 1.57 radians up to the upper pole the ground is transparent because no color was
specified, therefore allowing you to see the sky sphere. Note that you must specify at least
two colors for the ground, otherwise there will be no ground. If you want a ground with a
constant color, specify the same color twice, as in the sketch below. Specifying only one
color does not produce a ground.
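For instance, a constant-color ground below a blue sky can be written as follows (a minimal
sketch using the values from the table above):
Example:
Background {
  skyColor [ 0 0 1 ]
  groundColor [ 0.5 0.5 0, 0.5 0.5 0 ]   # the same color twice gives a constant ground
  groundAngle [ 1.57 ]                   # ground extends from the lower pole up to the horizon
}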
You can place images on the sides, top, and bottom of a conceptually infinitely large box
placed inside the ground sphere. Since the box is placed inside the sky and ground spheres,
in order to see through to these the images should have transparent parts.

The source code that follows was used to produce the background for the above image.
Example:
Background {
skyAngle [ 1.2, 1.57 ]
skyColor [ 0 0 1, 0 0 0.6, 1 0 0 ]
groundColor [ 0.5 0.5 0, 0.5 0.5 0 ]  # color duplicated and groundAngle added so the ground shows, as described above
groundAngle [ 1.57 ]
backUrl "back4.gif"
rightUrl "back4.gif"
leftUrl "back4.gif"
frontUrl "back4.gif"
}

The image used for the sides of the box is:

The border presented in the image is only there to give you a notion of the image size; it is
not part of the image. Black is the transparent color of this image. Looking at the image one
can see that the mountains start only halfway up the image, not from the bottom. This is
because the desired effect was to have the mountains begin where the ground ended and the
sky began.

Fog Node
The Fog node can be used to add realism to your world. It provides atmosphere, creating a
mist or a heavy fog depending on the fields specified. This node is a bindable node.
The fields for this node are:
- color specifies the color of the fog.
- fogType specifies how the fog's density increases with distance. Allowed values are
"LINEAR" and "EXPONENTIAL". "LINEAR" fog increases linearly with distance, which provides
some degree of depth perception; "EXPONENTIAL" fog provides more natural results.
- visibilityRange defines the distance at which objects are totally obscured by the fog.
Objects further away from the user than visibilityRange have their color changed to color.
Objects closer to the user have their color blended with the specified color, the amount of
blending depending on the distance. A value of 0 means no fog.
Syntax:
Fog {
color 1 1 1
fogType "LINEAR"
visibilityRange 0
}

The position of this node does not affect its scope, i.e. the fog affects all shapes being drawn.
The Fog node does not affect the Background, i.e. regardless of the Fog field values the
Background is always visible. To obtain a more realistic effect the Background should be the
same color as the Fog.
In the VRML example provided, the spheres are placed ten meters apart along the Z axis and a
white Background is used. A sketch of a similar scene follows.
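A minimal sketch along the lines of that example (the visibilityRange value and the sphere
material are assumptions):
Example:
Background { skyColor [ 1 1 1 ] }   # white, the same color as the fog
Fog {
  color 1 1 1
  fogType "LINEAR"
  visibilityRange 35   # objects 35 meters away are fully obscured
}
Transform {
  translation 0 0 -10
  children Shape {
    appearance Appearance { material Material { diffuseColor 1 0 0 } }
    geometry Sphere { }
  }
}
Transform {
  translation 0 0 -20
  children Shape {
    appearance Appearance { material Material { diffuseColor 1 0 0 } }
    geometry Sphere { }
  }
}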

WorldInfo Node
This node has no visual impact on your world; its use is restricted to documentation
purposes and to giving your world a title.

Syntax:
WorldInfo {
info [ ]
title ""
}

The title field is a string, and info is a list of strings.
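For instance (the strings below are illustrative):
Example:
WorldInfo {
  title "My First World"
  info [ "A sample world built while following this tutorial",
         "Units in meters, angles in radians" ]
}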

Coordinate Node
This node appears inside the PointSet, IndexedLineSet and IndexedFaceSet.
This node has a single field which takes a list of 3D coordinates separated by spaces or
commas.
Syntax:
Coordinate {
point [ ]
}

Color Node
This node appears inside the PointSet, IndexedLineSet, IndexedFaceSet and ElevationGrid.
This node has a single field which takes a list of RGB values separated by spaces or
commas.
Syntax:
Color {
color [ ]
}
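As a sketch of how these two nodes are used together (the values are illustrative), a PointSet
with one color per point could be written as:
Example:
Shape {
  geometry PointSet {
    coord Coordinate {
      point [ 0 0 0, 1 0 0, 0 1 0 ]   # one 3D coordinate per point
    }
    color Color {
      color [ 1 0 0, 0 1 0, 0 0 1 ]   # one RGB value per point
    }
  }
}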

Normal Node
This node appears inside an IndexedFaceSet or ElevationGrid.
This node has a single field which takes a list of 3D vectors separated by spaces or commas.
Syntax:
Normal {
vector [ ]
}

Normals are outside the scope of the present version of this tutorial. The node is included
here only for completeness; future versions of the tutorial may deal with normals.
