Mysliwetz and Dickmanns 1986: A vision system for real-time image sequence processing is presented. Implemented as a multiprocessor system, it consists mainly of industry-standard single-board computers. Its real-time local edge detection capability, plus an integrated 2-axis camera-pointing mechanism with controller, makes it a versatile vision tool for well-structured dynamic scenes. Different processing hierarchies may be established entirely in software by means of inter-processor message-passing functions. An application to searching for and tracking relevant patterns in a real road scene is described as the initial orientation phase of a vision-guided road vehicle.
This paper describes a vision system currently in use on a vision-guided road vehicle, one that potentially also covers a wide range of other robot vision applications, such as object recognition in well-structured scenes or moving-object tracking.
Progress in robotics today is driven mainly by improving a system’s sensing and perception capabilities. Autonomous mobile systems form one important, emerging field of robotics that relies heavily on improved sensing and has recently seen increased development effort. The imaging and ranging concepts in use include sonar, radar, laser, IR and TV imagery, with vision-based approaches appearing to dominate. Related projects are described in Refs.  through , which mainly investigate vision-based guidance and obstacle avoidance, sometimes augmented by additional range sensors.
Our approach addresses two major aspects. First, it seems highly desirable for visually self-orienting systems, as well as for many other robot vision applications, to have a camera-pointing capability. This enables a machine to actively change its field of view, e.g. for searching its environment or for tracking an object. Relatively few developments of camera-positioning devices for robotics are described in the literature, such as in  and , where two stereo-vision “head-eye” robots are presented. Second, facing the problem of real-time image analysis with limited processing resources, a flexible and dynamic capability to focus attention on task-significant scene regions is necessary. This lends itself to distributed architectures, which allow efficient allocation of processors to local operations. Developments comparable to some degree, also applying multi-microprocessor architectures to image processing, can be found in  and , the latter machine however being of a much finer-grained type. The system described here uses industry-standard 16-bit single-board computers connected by a global bus to form a loosely coupled, high-level-language-programmable multiprocessor. The key element of its real-time processing capability is a distributed window-processing concept, combined with a flexible task-partitioning approach that results in modest communication rates.
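The loosely coupled, message-passing organization over a global bus can be sketched in miniature as a shared mailbox between processors. This is a hedged illustration only: the type and function names (`Mailbox`, `mbox_send`, `mbox_recv`), the FIFO layout and the message format are hypothetical, not the actual BVV interface.

```c
/* Minimal sketch of inter-processor message passing over a shared
 * mailbox; all names and the layout are hypothetical illustrations. */
#include <string.h>

#define MAX_MSG 16
#define PAYLOAD 32

typedef struct {
    int dest;              /* destination processor id */
    char data[PAYLOAD];    /* message payload, e.g. extracted edge data */
} Message;

typedef struct {
    Message queue[MAX_MSG]; /* global-bus mailbox, a simple FIFO */
    int head, tail, count;
} Mailbox;

/* Post a message onto the bus; returns 0 on success, -1 if the FIFO is full. */
int mbox_send(Mailbox *m, int dest, const char *data) {
    if (m->count == MAX_MSG) return -1;
    m->queue[m->tail].dest = dest;
    strncpy(m->queue[m->tail].data, data, PAYLOAD - 1);
    m->queue[m->tail].data[PAYLOAD - 1] = '\0';
    m->tail = (m->tail + 1) % MAX_MSG;
    m->count++;
    return 0;
}

/* Fetch the next message addressed to processor 'self'; -1 if none waiting. */
int mbox_recv(Mailbox *m, int self, Message *out) {
    if (m->count == 0 || m->queue[m->head].dest != self) return -1;
    *out = m->queue[m->head];
    m->head = (m->head + 1) % MAX_MSG;
    m->count--;
    return 0;
}
```

Because the hierarchy is defined only by who sends what to whom, reorganizing the processing hierarchy amounts to changing the message flow in software, with no hardware changes.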
The underlying idea of this system, to process only task-significant regions of a scene, was originally proposed and developed by Dickmanns, Haas  and Graefe  in an earlier 8-bit version, the BVV_1. Its basic feasibility was shown by Meissner  in an application that stabilized and controlled an inverted pendulum on a moving cart solely from extracted image data used as measurements. The present version, besides using higher-performance processors, has been functionally enhanced: it has active view-direction control and is no longer constrained to mere image preprocessing. Provisions have also been made to process two camera signals.
Image processing, interpretation and system coordination are distributed onto different, dedicated processors. Multiple picture processors can simultaneously access and process subregions (windows) of the digitized image in parallel, performing low-level operations such as local edge extraction in real time. The extracted information is passed to higher, more general processing levels for interpretation. The interpreting module may then initiate camera movements to scan a wider field of view or to track an object.
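The window-processing idea, a picture processor scanning only a small subregion of the image for an intensity edge, might look roughly as follows. The operator here is a simple first-difference step detector; the image size, the function name `find_edge_in_window` and the operator itself are assumptions for illustration, not the system's actual edge extraction method.

```c
/* Hedged sketch of local edge extraction inside a window; the operator
 * and window layout are illustrative, not the original system's. */
#include <stdlib.h>

#define IMG_W 64
#define IMG_H 64

/* Scan one image row inside the window columns [x0, x1) for the strongest
 * horizontal intensity step; returns the column of maximum gradient. */
int find_edge_in_window(const unsigned char img[IMG_H][IMG_W],
                        int row, int x0, int x1) {
    int best_x = x0, best_g = -1;
    for (int x = x0; x + 1 < x1; x++) {
        int g = abs((int)img[row][x + 1] - (int)img[row][x]);
        if (g > best_g) { best_g = g; best_x = x; }
    }
    return best_x;
}
```

Because each such operation touches only its own window, several picture processors can run operators like this in parallel, and only the compact results (edge positions, not pixels) need to cross the bus to the interpretation level.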