Baten and Mueller 1995: In this paper, we present a new approach for safe, autonomous navigation through rough terrain. The capabilities needed to solve such a complex task are provided by concurrent processing modules that use different data sources. To match the concurrent processing structure of the whole system, the well-known test vehicle VaMoRs is equipped with a hybrid computer architecture: a transputer cluster forms the hardware system kernel and is connected to a standard PC, a GPS receiver, an inertially stabilized pan-and-tilt camera platform, an inertial sensor block, and three CCD cameras.
All vision processes take advantage of the 4D approach, which exploits knowledge about the dynamics of the processes to estimate the state variables of the vehicle and its environment. For navigation, terrain maps at different levels of detail are used. With this static a priori knowledge, two tasks are performed: determining the vehicle's own position in the map and planning a mission to reach a user-defined goal position. The initial position can be derived coarsely from GPS data; it is then refined during the mission by fusing sensor data from odometry, gyroscopes, and vision.
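The recursive fusion of odometry and GPS described above can be sketched with a minimal one-dimensional Kalman-style update; odometry predicts the position forward, and a noisy GPS fix corrects it. All noise values, the motion model, and the function name `fuse_position` are illustrative assumptions for this sketch, not the paper's actual 4D estimator:

```python
import random

def fuse_position(steps=50, speed=1.0, dt=0.1, seed=0):
    """Fuse dead-reckoned odometry with noisy GPS fixes (1D sketch)."""
    rng = random.Random(seed)
    q = 0.05   # assumed odometry (process) noise variance
    r = 4.0    # assumed GPS measurement noise variance
    x_true = 0.0
    x_est, p = 0.0, 1.0  # position estimate and its variance
    for _ in range(steps):
        x_true += speed * dt
        # predict: propagate the estimate with odometry
        x_est += speed * dt
        p += q
        # update: correct with a simulated noisy GPS fix
        z = x_true + rng.gauss(0.0, r ** 0.5)
        k = p / (p + r)            # Kalman gain
        x_est += k * (z - x_est)
        p *= (1.0 - k)
    return x_true, x_est

true_pos, est_pos = fuse_position()
```

The estimate converges toward the true position with an error well below the raw GPS noise, which is the point of fusing the two sources rather than trusting either alone.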
Knowing its own position, the navigation program can focus attention on expected landmarks and selects the control mode of the vehicle. Several control modes are available, such as road following, convoy driving, and turning off; the latter is described in more detail. A branch-off is searched for in the image. Once it is found, its form parameters are estimated using recursive filter techniques. These form parameters determine the clothoid path the vehicle is to follow, defined for a constant steering rate that is set by a feed-forward control program. Gaze is feed-forward controlled as well, in order to keep the region of interest in the image.
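A constant steering rate at constant speed makes the path curvature grow linearly with arc length, which is exactly a clothoid. The following sketch integrates such a path numerically; the curvature-rate parameter `c`, step size, and path length are hypothetical values, not the form parameters the paper estimates from the image:

```python
import math

def clothoid_points(c=0.02, length=20.0, ds=0.05):
    """Sample (x, y, heading) along a clothoid with curvature kappa(s) = c*s."""
    x, y, theta = 0.0, 0.0, 0.0
    pts = [(x, y, theta)]
    s = 0.0
    for _ in range(int(length / ds)):
        kappa = c * s          # curvature linear in arc length
        theta += kappa * ds    # heading is the integral of curvature
        x += math.cos(theta) * ds
        y += math.sin(theta) * ds
        s += ds
        pts.append((x, y, theta))
    return pts

path = clothoid_points()
```

Because curvature starts at zero, the clothoid joins a straight road segment without a curvature jump, which is why it suits a turn-off maneuver driven by a simple feed-forward steering command.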