Pellkofer et al. 2001b: For robust and safe behavior in a natural environment, an autonomous vehicle needs an elaborate vision sensor as its main source of information. The vision sensor must be adaptable to the external situation, the mission, the capabilities of the vehicle and the knowledge about the external world accumulated up to the present time. In the EMS-Vision system, this vision sensor consists of four cameras with different focal lengths mounted on a highly dynamic pan-tilt camera head. Image processing, gaze control and behavior decision interact with each other in a closed loop. The image-processing experts specify so-called regions of attention (RoAs) for each object in 3D object coordinates; these RoAs should be visible at the resolution required by the applied measurement techniques. The behavior-decision module specifies the relevance of objects such as obstacles, road segments, crossings or landmarks in the situation context. The gaze-control unit uses all of this information to plan, optimize and perform a sequence of smooth pursuits interrupted by saccades; the sequence with the best information gain is executed. The information gain depends on the relevance of objects or object parts, the duration of the smooth-pursuit maneuvers, the quality of perception and the number of saccades. The functioning of the EMS-Vision system is demonstrated in a complex and scalable autonomous mission with the UBM test vehicle VaMoRs.
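The gaze-planning criterion described above (reward relevant, long, high-quality smooth pursuits; penalize saccades) can be sketched in code. This is a minimal illustration, not the authors' implementation: the scoring form, field names and the fixed saccade penalty are all assumptions introduced for clarity.

```python
# Hypothetical sketch of selecting the gaze sequence with the best
# information gain. All names and weights are assumptions, not the
# EMS-Vision implementation.
from dataclasses import dataclass

@dataclass
class PursuitSegment:
    relevance: float   # relevance of the tracked object (behavior decision)
    duration: float    # seconds of smooth pursuit on its RoA
    quality: float     # expected quality of perception (0..1)

SACCADE_COST = 0.2     # assumed fixed penalty per saccade (blind interval)

def information_gain(sequence):
    """Score one candidate gaze sequence: sum the weighted pursuit
    contributions, then subtract a penalty for each saccade needed
    to jump between consecutive segments."""
    gain = sum(s.relevance * s.duration * s.quality for s in sequence)
    n_saccades = max(len(sequence) - 1, 0)
    return gain - SACCADE_COST * n_saccades

def best_sequence(candidates):
    """Return the candidate sequence with maximal information gain."""
    return max(candidates, key=information_gain)

# Example: one long pursuit of a highly relevant object vs. two shorter
# pursuits connected by a saccade.
single = [PursuitSegment(relevance=1.0, duration=2.0, quality=0.9)]
split = [PursuitSegment(1.0, 1.0, 0.9), PursuitSegment(0.8, 1.0, 0.7)]
chosen = best_sequence([single, split])
```

Under these assumed weights, the single long pursuit wins because the saccade penalty and the lower quality of the second segment reduce the split sequence's score; in the real system the trade-off is driven by the relevance values supplied by behavior decision.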