Dickmanns 1986, Abstract: A brief review of the development of vehicle and road systems is given, discussing the functions of vision in this context. The services that may be provided by computer vision systems of the future are analyzed. These functions are grouped into three areas: 1. “internalization” of the actual traffic regulation and navigation status, 2. monitoring functions, and 3. partially or fully autonomous control. For each group, several exemplary proposals are discussed, leaving the technical problems of image sequence processing aside and focusing on man-machine interface problems. The flexibility of the future knowledge-based monitoring and automatic control system using active vision will require special emphasis on cognitive ergonomics/engineering to achieve acceptance and to make the full potential of the system accessible to the driver for selecting parameters according to his preferred style of communication and driving.

1. Introduction

Today’s road traffic systems are based almost exclusively on the human visual system for guidance and control. Before the advent of the automobile, the draught animal also provided some guidance function through its eye/brain system. Horses, for example, as relatively intelligent animals, learned to move at a certain distance from the (right) road boundary and even memorized the road system around their home. They were thus able to find their way home even if the driver had fallen asleep. At road junctions they could take oral commands and proceed in the right direction without further control action by the driver. These admittedly limited autonomous guidance and control capabilities were traded away for higher performance and endurance with the switch to a technical propulsion system onboard the vehicle.
The development of microminiaturized electronic devices such as CCD TV cameras and digital computers will allow the introduction of technical vision systems onboard the vehicle, which can take over monitoring, guidance, and control functions in order to increase safety and/or to reduce the driver’s workload. An early attempt is described in [1]. However, the cooperation between the human driver, who will always have to carry the burden of ultimate responsibility for actions taken in a mixed mode, and the automatic system has to be carefully designed and tuned. That is why this topic should be discussed and investigated in the near future, even though the introduction of autonomous guidance systems is one or two decades away. Since vision requires intelligence for understanding what is being seen, this problem may be embedded in the more general paradigm of cognitive ergonomics, which deals with the interfaces between human and artificial intelligence.