Release Notes for Build 114

From C4 Engine Wiki

Release date: January 9, 2006

  • Fog has been completely redesigned in this build. Fog is no longer a property of a zone, and any existing uses of the old fog property will disappear from worlds opened in this build. Fog is now controlled by placing a fog space node into the scene. The zone containing the fog space node is automatically fogged, and other zones may be linked to the fog space so that one fog space can be used to fog areas covered by more than one zone. The fog space itself defines a boundary plane between fogged and unfogged half-spaces. The half-space on the negative side of the plane contains fog, and the half-space on the positive side of the plane is clear. A single formula is used to handle the four possible configurations between a camera and an arbitrary point in the scene. (The camera can either be in the fog half-space or in the clear half-space, and a surface point can be on either side of the plane.) In all cases, the amount of fog applied to a fragment is determined by the distance along the direction from the surface point to the camera that actually lies on the fogged side of the plane. Fog is applied to everything, and it is integrated into the existing rendering passes—there is no post-processing calculation.
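The fogged-distance computation described above can be sketched as a small standalone function. This is a sketch only: the `Plane` type, the `FoggedDistance` name, and the branching structure are illustrative assumptions, not the engine's actual (single-formula) implementation.

```cpp
#include <cassert>
#include <cmath>

// Plane n·x + d = 0; the negative half-space (n·x + d < 0) is fogged.
struct Plane { float nx, ny, nz, d; };

static float SignedDistance(const Plane& p, float x, float y, float z)
{
    return p.nx * x + p.ny * y + p.nz * z + p.d;
}

// Returns the length of the segment between the surface point P and the
// camera C that lies on the fogged (negative) side of the plane,
// covering all four camera/point configurations.
float FoggedDistance(const Plane& plane,
                     float cx, float cy, float cz,   // camera position
                     float px, float py, float pz)   // surface point
{
    float dc = SignedDistance(plane, cx, cy, cz);
    float dp = SignedDistance(plane, px, py, pz);

    float len = std::sqrt((cx - px) * (cx - px) +
                          (cy - py) * (cy - py) +
                          (cz - pz) * (cz - pz));

    if (dc <= 0.0f && dp <= 0.0f) return len;     // both endpoints fogged
    if (dc >= 0.0f && dp >= 0.0f) return 0.0f;    // both endpoints clear

    // Segment crosses the plane: keep only the fogged fraction.
    // Signed distance is linear along the segment, so the crossing
    // parameter (measured from the camera) is dc / (dc - dp).
    float t = dc / (dc - dp);
    return (dc < 0.0f) ? len * t : len * (1.0f - t);
}
```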
  • A world may contain multiple fog spaces, but only one can be active at any time for a particular camera view if the results are to be well defined. It's okay for one fog space to be used in a zone directly visible to the primary camera and then for another fog space to be seen through a remote portal. This works because the image seen through the remote portal is rendered from a different camera.
  • Fog can be rendered in two modes. By default, fog is computed at each fragment to yield an accurate result regardless of the position of the fog plane or an object's degree of tessellation. This can have performance implications, however, so it is also possible to force the fog distance to be computed on a per-vertex basis. Doing this on Nvidia hardware is especially fast because it can take advantage of dedicated fog silicon that reduces the per-fragment cost to virtually zero (but this has limited precision). Since the interpolated fog distance for large polygons can get quite inaccurate far from the vertices, calculating fog distance per-vertex generally requires higher levels of tessellation. In this mode, it's also necessary to split polygons at the fog plane so that positive fog distances don't get extrapolated outside the fogged half-space. The geometry flags now contain a setting for splitting polygons automatically at the fog plane referenced by the geometry's owning zone.
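Why polygons must be split at the fog plane in per-vertex mode can be shown with a small 1-D numeric sketch (the setup and function name here are illustrative, not engine code): the exact fogged distance has a kink at the fog plane, so linearly interpolating per-vertex values across an edge that crosses the plane gives the wrong answer, while splitting the edge at the plane makes each piece exactly linear.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// 1-D setup: fog plane z = 0, fog fills z < 0, camera on the z axis.
// The exact fogged distance to a point at height z is the length of the
// view segment with z < 0. (Assumes the camera is in the fogged half-space.)
float ExactFogDistance(float cameraZ, float pointZ)
{
    float lo = std::min(cameraZ, pointZ);
    float hi = std::max(cameraZ, pointZ);
    return std::min(hi, 0.0f) - lo;
}
```

With the camera at z = -5, an edge from a vertex at z = -1 (fog distance 4) to a vertex at z = 3 (fog distance 5) crosses the plane; interpolating the vertex values at z = 1 gives 4.5, but the exact value is 5. Splitting the edge at z = 0 (where the fog distance is 5) removes the kink from each piece.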
  • A fog space is created in the editor using the fog space tool in the light/sound/space panel. Only the plane of the fog space matters—the rectangular size can be anything you want. Additional zones can be linked to the fog space by first making the fog space the current target (Ctrl-T), then selecting one or more zones, and finally choosing Link to Target (Ctrl-L). The zone actually containing the fog space does not need to be explicitly linked.
  • As a performance optimization, direct portals may have a flag set that prevents fog from being rendered through them. This is useful if the portal looks into a zone containing fog, but it is known that none of the fogged area can be seen from the other side of the portal.
  • Occlusion portals are now fully implemented. (These have been available in the editor, but were ignored by the visibility code in previous builds.) Also known as an antiportal, an occlusion portal prevents objects that are behind it from being rendered. An occlusion portal is not linked to a destination zone and is usually placed in the middle of a zone inside some large object that blocks a lot of things from view. Any geometry, light, portal, or effect that is completely blocked by an occlusion portal from the camera's perspective is not considered any further for rendering. In general, occlusion portals should be as large as possible so that culling is maximized, and the number of occlusion portals visible at any one time should be kept as low as possible to minimize CPU calculations. Note that occlusion portals may not always produce big speed improvements where you think they should because of good hardware z-culling.
  • Added the SetOcclusionProc() function to the Node class and modified the existing SetVisibilityProc() function. The procedure installed by the SetVisibilityProc() function now takes a pointer to a single ZoneRegion and must determine whether a node is visible within that region. The procedure installed by the SetOcclusionProc() function takes a pointer to a list of ZoneRegion objects and must determine whether a node is occluded within any of them. See the documentation for details. By default, a node's visibility and occlusion procedures check the node's bounding sphere against a region's bounding planes using the functions provided in the Region class. Several geometry primitives override this behavior to use tighter bounding volumes like boxes or cylinders.
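The default bounding-sphere tests can be sketched in standalone form. The `Plane`, `Sphere`, and `ZoneRegion` types below are minimal stand-ins, not the actual C4 classes, and the sign convention (positive signed distance on a region's interior side) is an assumption.

```cpp
#include <cassert>
#include <vector>

struct Plane { float nx, ny, nz, d; };
struct Sphere { float x, y, z, radius; };

struct ZoneRegion
{
    std::vector<Plane> boundingPlanes;   // interior is the positive side
};

static float SignedDistance(const Plane& p, const Sphere& s)
{
    return p.nx * s.x + p.ny * s.y + p.nz * s.z + p.d;
}

// Default visibility test: the sphere is visible unless it lies
// completely outside at least one of the region's bounding planes.
bool SphereVisible(const Sphere& s, const ZoneRegion& region)
{
    for (const Plane& p : region.boundingPlanes)
    {
        if (SignedDistance(p, s) < -s.radius) return false;
    }
    return true;
}

// Default occlusion test: the sphere is occluded only if it lies
// completely inside every plane of some region in the occlusion list.
bool SphereOccluded(const Sphere& s, const std::vector<ZoneRegion>& occlusionList)
{
    for (const ZoneRegion& region : occlusionList)
    {
        bool inside = true;
        for (const Plane& p : region.boundingPlanes)
        {
            if (SignedDistance(p, s) < s.radius) { inside = false; break; }
        }
        if (inside) return true;   // fully blocked by this region
    }
    return false;
}
```

Note the asymmetry: visibility only needs the sphere to touch the region, while occlusion requires full containment, which is why the two procedures are installed separately.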
  • Ordinarily, a game module only needs to worry about visibility and occlusion procedures for special effect classes derived from the Effect node. An effect needs to either provide a bounding sphere (by overriding the Node::CalculateBoundingSphere() function) or needs to install a visibility procedure and an occlusion procedure. The Render() function of an effect no longer needs to test for visibility and will only be called if the effect is enabled, visible, and unoccluded. See the documentation for details.
  • Improved the way in which multi-zone effects are handled. Because some special effects can span multiple zones (notably particle systems like smoke trails), their eligibility for rendering can't be determined solely based on whether their owning zones are visible. Being a scene graph node, an effect node can only be in one zone at a time, but there is a mechanism that lets any number of other zones reference the effect so that it will be considered for rendering whenever those zones are visible. By default, an effect is referenced by the zone that owns the effect node. If the effect defines a bounding sphere (by overriding the Node::CalculateBoundingSphere() function), then any zone into which that sphere extends through portals also references the effect. This ensures, for instance, that particles from an explosion that fly into another zone are visible to someone who can't see the zone containing the effect node itself.
  • An effect may override the default zone-referencing behavior by disabling the post-bounding-sphere calculation. This is done by adding the following line to an effect's constructor or Preprocess() function:
   SetActiveUpdateFlags(GetActiveUpdateFlags() & ~kUpdatePostBounding);
  • If this is done, then the effect is responsible for explicitly adding references to zones from which the effect could be visible. The Effect::AddEffectReference() and Effect::RemoveEffectReference() functions are supplied to do this. It is often convenient to call these from within overridden EnterZone() and ExitZone() functions. For more details, see the documentation.
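A self-managed effect of this kind might look like the sketch below. The `Effect` and `Zone` classes here are minimal stand-ins modeling the reference list as a set, and the exact EnterZone()/ExitZone() signatures are assumptions; only the AddEffectReference()/RemoveEffectReference() names come from the release notes.

```cpp
#include <cassert>
#include <set>
#include <string>

// Hypothetical stand-ins for the engine types.
struct Zone { std::string name; };

class Effect
{
public:
    virtual ~Effect() = default;

    // Modeled after Effect::AddEffectReference() / RemoveEffectReference().
    void AddEffectReference(Zone *zone)    { referencedZones.insert(zone); }
    void RemoveEffectReference(Zone *zone) { referencedZones.erase(zone); }

    bool ReferencesZone(Zone *zone) const { return referencedZones.count(zone) != 0; }

private:
    std::set<Zone *> referencedZones;
};

// An effect that manages its own zone references instead of relying on the
// post-bounding-sphere calculation (which it would disable with
// SetActiveUpdateFlags(GetActiveUpdateFlags() & ~kUpdatePostBounding)).
class TrailEffect : public Effect
{
public:
    // Hypothetical notifications fired as the trail crosses zone boundaries.
    void EnterZone(Zone *zone) { AddEffectReference(zone); }
    void ExitZone(Zone *zone)  { RemoveEffectReference(zone); }
};
```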
  • The Display Manager will now choose the highest refresh frequency available for any particular resolution when in fullscreen mode. Previously, it let the driver choose the frequency, but this seems to have always resulted in the somewhat low 60 Hz refresh rate.
  • Made various vertex program and fragment program improvements after observing how the driver was compiling them on both Nvidia and ATI hardware. The engine now contains some conditionally-compiled hooks that let an external tool capture the actual hardware command buffer, from which it's possible to gain a lot of information about exactly what is being done on the hardware. (Sorry, this tool cannot be released.)
  • The command console now stores a small command history that can be accessed with the up and down arrows.
  • Changed what used to be called "focal length" for a spot light to "apex tangent". The term "focal length" was borrowed from the same property that a camera has in relation to its field of view, but it's a little confusing when it's applied to spot lights. The apex tangent t is the trigonometric tangent of half of the horizontal field of illumination. That is, at a distance of one unit from the spot light's position, in the direction that it's pointing, the pyramid of illumination has widened to t units left and right.
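Under the definition above, converting a spot light's full field-of-illumination angle to an apex tangent is a one-liner (the function name is illustrative, not engine API):

```cpp
#include <cmath>

// Convert a spot light's full horizontal field of illumination (in radians)
// to the apex tangent described above: t = tan(fieldAngle / 2).
float ApexTangentFromFieldAngle(float fieldAngle)
{
    return std::tan(fieldAngle * 0.5f);
}
```

For example, a 90-degree field of illumination gives an apex tangent of tan(45°) = 1: one unit out from the light, the pyramid spans one unit to each side.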
  • The World Editor now shows a new tool that will let you select objects by dragging out a box, but this is not yet implemented and has no effect if used.
  • Fixed a small bug in the World Settings dialog that always caused the skybox glow enable box to be unchecked when the window was opened. (The skybox would still get the glow property if you checked the box and closed the window.) Also fixed a bug that made it impossible to disable glow for a geometry once it was enabled.
  • Fixed a minor bug in which the brightness setting in the Graphics Options dialog was not being saved unless you changed the display resolution or fullscreen setting. Also made it so that changing only the brightness and then cancelling the dialog would restore the original brightness.
  • Fixed a bug that would generate a bad fragment program if specular color, microfacet color, and gloss map were all being used in a material.
  • Fixed a bug that prevented depth maps from being created in the previous build due to all of the shader system changes.