For participants only. Not for public distribution.

Note #29
Software design overview

John Nagle
Last revised July 22, 2003.

Here, in one place, is a summary of the system design.

Sensing the terrain

Range inputs from the laser rangefinder, video inputs from the camera, and movement inputs from the INS are combined to produce a viewable image showing ground contours and road-following information.
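As a rough illustration of the fusion step, the sketch below projects one laser rangefinder return, using the INS pose, into a world-frame elevation grid. All names, the pose layout, and the cell size are illustrative assumptions, not the actual interfaces.

```python
import math

CELL_SIZE = 0.5  # meters per grid cell (assumed resolution)

def update_elevation_grid(grid, vehicle_pose, range_m, beam_angle):
    """Project one rangefinder return into a world-frame elevation grid.

    vehicle_pose: (x, y, z, heading, pitch) from the INS (simplified).
    beam_angle: scan angle of the laser beam relative to the heading.
    Angles are in radians; ranges and positions in meters.
    """
    x, y, z, heading, pitch = vehicle_pose
    # Direction of the beam in the world frame (flat-earth approximation)
    yaw = heading + beam_angle
    horiz = range_m * math.cos(pitch)
    hit_x = x + horiz * math.cos(yaw)
    hit_y = y + horiz * math.sin(yaw)
    hit_z = z - range_m * math.sin(pitch)   # beam tilted down by `pitch`
    cell = (round(hit_x / CELL_SIZE), round(hit_y / CELL_SIZE))
    # Keep the highest elevation seen per cell (conservative for obstacles)
    grid[cell] = max(grid.get(cell, hit_z), hit_z)
    return cell
```

Keeping the per-cell maximum is one simple policy; a real updater would also track measurement confidence per cell.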

This subsystem is working when the display of the elevation data stays locked to the image of the terrain as the vehicle moves, and when the road-follower data correctly indicates road position.


Making driving decisions

This subsystem makes driving decisions. Most steering is done by the OpenSteer system, using input data from the sensor visualization subsystem. OpenSteer is a field-based steering control system: the vehicle is attracted to the next waypoint, attracted to the road location, and repelled by obstacles and untraversable ground.
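A minimal sketch of the field idea described above: sum an attraction toward the waypoint, an attraction toward the road, and a repulsion from each obstacle, then steer along the resulting vector. The function name, weights, and falloff law are illustrative assumptions, not OpenSteer's actual API.

```python
import math

def steering_force(pos, waypoint, road_center, obstacles,
                   w_goal=1.0, w_road=0.5, w_obstacle=4.0):
    """Sum of attractive and repulsive fields (weights are placeholders).

    All points are (x, y) tuples in meters.
    Returns the desired steering direction as a unit vector.
    """
    def toward(target, weight):
        dx, dy = target[0] - pos[0], target[1] - pos[1]
        d = math.hypot(dx, dy) or 1e-9
        return (weight * dx / d, weight * dy / d)

    fx, fy = toward(waypoint, w_goal)          # attraction to next waypoint
    rx, ry = toward(road_center, w_road)       # attraction to the road
    fx, fy = fx + rx, fy + ry
    for obs in obstacles:
        dx, dy = pos[0] - obs[0], pos[1] - obs[1]   # points away from obstacle
        d = math.hypot(dx, dy) or 1e-9
        # Repulsion falls off with the square of distance
        fx += w_obstacle * dx / (d * d * d)
        fy += w_obstacle * dy / (d * d * d)
    mag = math.hypot(fx, fy) or 1e-9
    return (fx / mag, fy / mag)
```

With no obstacles the vehicle heads for the waypoint; an obstacle directly ahead flips the net field away from it, which is exactly how such fields produce dead-end behavior.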

OpenSteer can't handle dead ends. The "backseat driver" monitors the performance of OpenSteer and, when the vehicle is stuck, inserts subgoals to take it in a different direction. The "backseat driver" maintains a vicinity map of what the vehicle has seen recently and has access to a path planner that can solve simple mazes.
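A planner that "can solve simple mazes" over a vicinity grid can be as plain as breadth-first search. The sketch below is one such planner under assumed interfaces (cell coordinates, a set of blocked cells); it is not the team's actual planner.

```python
from collections import deque

def plan_path(start, goal, blocked, bounds):
    """Breadth-first search over a vicinity grid (a simple maze solver).

    blocked: set of (x, y) cells the vehicle cannot enter.
    bounds: (width, height) of the grid. Returns a cell path or None.
    """
    if start == goal:
        return [start]
    width, height = bounds
    came_from = {start: None}
    frontier = deque([start])
    while frontier:
        cur = frontier.popleft()
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dx, cur[1] + dy)
            if (0 <= nxt[0] < width and 0 <= nxt[1] < height
                    and nxt not in blocked and nxt not in came_from):
                came_from[nxt] = cur
                if nxt == goal:
                    path = [goal]
                    while came_from[path[-1]] is not None:
                        path.append(came_from[path[-1]])
                    return path[::-1]
                frontier.append(nxt)
    return None  # dead end: no route within the vicinity map
```

A cell along the returned path can then be handed to the steering layer as a subgoal, replacing the waypoint the vehicle could not reach directly.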

Outputs from OpenSteer are monitored by low-level safety and sanity checks, which use sensor data (primarily the VORAD radar and the sonars) to veto actions from the steering system.
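The veto layer can be pictured as a pure function sitting between the steering system and the actuators. The sketch below is illustrative only; the thresholds and sensor interfaces are placeholders, not the real tuning values.

```python
def veto_command(speed_cmd, radar_range_m, sonar_ranges_m,
                 min_radar_gap=15.0, min_sonar_gap=1.0):
    """Low-level sanity check: override the steering system's speed command
    when the radar or the sonars report an obstacle too close ahead.

    radar_range_m: nearest radar target ahead, or None if no target.
    sonar_ranges_m: list of current sonar readings in meters.
    Thresholds are illustrative placeholders. Returns the (possibly
    reduced) speed command.
    """
    if radar_range_m is not None and radar_range_m < min_radar_gap:
        speed_cmd = min(speed_cmd, radar_range_m / 3.0)  # slow proportionally
    if any(r < min_sonar_gap for r in sonar_ranges_m):
        speed_cmd = 0.0   # something within touching distance: stop
    return speed_cmd
```

The key property is that this layer can only reduce what the steering system commands, never increase it, so a fault in the higher-level planner cannot drive the vehicle into a sensed obstacle.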

This system's performance can be monitored by watching the field map and the vicinity map.


Open issues

  • The "certainty grid updater" area needs more detail.
  • There are ways to use field maps to control non-holonomic robots (where turning radius is limited), but we need to look into that problem further.
  • OpenSteer will probably require substantial modification.
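On the non-holonomic point above, one candidate approach is to keep the field map as-is but clamp the commanded heading change to what the limited turning radius allows in each control step. The sketch below is only that idea in miniature; every name and number is an assumption.

```python
import math

def clamp_heading_command(current_heading, desired_heading,
                          speed_mps, min_turn_radius_m, dt):
    """Clamp the field map's desired heading change to the heading change
    the vehicle can actually achieve in one control step of length dt,
    given its speed and minimum turning radius. Angles in radians.
    """
    max_yaw_rate = speed_mps / min_turn_radius_m      # rad/s, kinematic limit
    max_delta = max_yaw_rate * dt
    # Smallest signed angle from current to desired heading
    delta = math.atan2(math.sin(desired_heading - current_heading),
                       math.cos(desired_heading - current_heading))
    delta = max(-max_delta, min(max_delta, delta))
    return current_heading + delta
```

This keeps the field formulation intact while guaranteeing the command stream stays within the vehicle's turning capability; whether that is sufficient, or a true non-holonomic planner is needed, is the open question.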