Wuensche 1986a, Abstract: Using a 2D satellite model plant (an air-cushion vehicle with reaction-jet control), automatic rendezvous and docking maneuvers have been investigated with real image processing hardware in the loop. The docking partner is completely passive; its 3D shape is considered to be known and is coded as a wire-frame model in the knowledge base. Image processing is done in real time on a custom-made multi-microprocessor MIMO computer system. Object recognition is achieved by feature aggregation exploiting the laws of perspective projection. During the approach and circumnavigation maneuvers, features are automatically tracked using dynamical models of the process for prediction, data filtering and control. A sequential Kalman filter formulation is used to accommodate the time-varying length of the feature vector due to occlusion, failures and redirection of processing elements by the knowledge base. A method has been devised to always select the feature vector yielding the best state estimate. The system runs at a cycle time of 0.13 seconds (8 video frames per control cycle) and performs physical docking with a static rendezvous partner. Experimental results are given as time charts and video film sequences.
The developed visual guidance scheme combining dynamical models and perspective projection is considered to be a powerful and effective general method for motion control by computer vision.
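The key property of the sequential Kalman filter formulation mentioned above is that measurements are folded into the state estimate one at a time, so features lost to occlusion or processing failures can simply be skipped in a given cycle. The following is a minimal sketch of that idea, not the paper's implementation; the two-element state, the measurement rows and the noise variances are illustrative assumptions.

```python
import numpy as np

def sequential_update(x, P, measurements):
    """Fold each available scalar measurement (H_row, z, r) into (x, P).

    Occluded or failed features are simply absent from `measurements`,
    so the effective measurement vector length may change every cycle
    without any re-dimensioning of the filter.
    """
    for H_row, z, r in measurements:
        H = np.asarray(H_row, dtype=float)
        S = H @ P @ H + r               # innovation variance (scalar)
        K = (P @ H) / S                 # Kalman gain (n-vector)
        x = x + K * (z - H @ x)         # state correction
        P = P - np.outer(K, H @ P)      # covariance update
    return x, P

# Example: 2-state filter (position, rate); only one of two position
# features happens to be visible in this cycle.
x = np.array([0.0, 0.0])
P = np.eye(2)
visible = [([1.0, 0.0], 1.2, 0.04)]    # (H row, measurement, variance)
x, P = sequential_update(x, P, visible)
```

Because each update is scalar, no matrix inversion is needed, which also suited the limited processing power of mid-1980s multi-microprocessor hardware.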
Rendezvous maneuvers in space are an indispensable part of complex space missions performed with limited launcher capability or lasting over extended periods of time: large structures have to be assembled in orbit, crews and equipment may have to be exchanged at space stations, maintenance and refueling have to be performed, or in-orbit repair may become necessary.
The present state of technology for rendezvous and docking requires either that a human operator is in the control loop or that both partners take an active role, providing sensory signals and/or attitude stabilization. Visual feedback control is being considered in some studies [1-4]; these require, however, special lighting arrangements onboard the docking partner, such as a three-point flashing-light docking aid for easier sensory data evaluation.
In the long run it is desirable to have a technology available which allows recognition of and docking with a completely passive partner, e.g. for in-orbit repair. This requires the capability to determine the complete relative state vector of both vehicles during final approach, even if the docking fixture cannot be seen from the initial position; up to that point, the approach over long distances may be guided by radar or ground-based measurements. A high-resolution imaging sensor, if necessary in connection with a light beam, is able to provide all information necessary for recognition of the relative position as it develops over time. The computer evaluating the image sequence has to have the appropriate knowledge about the process going on and about how to interpret what it is seeing as relative 3D object motion.
Since 3D motion (6 degrees of freedom) may be rather complex and hard to simulate physically on Earth, a 2D model plant (3 d.o.f.) has been developed which allows some basic aspects of vision-based rendezvous to be studied. This model plant is described in the next section. The approach taken in computer vision is then surveyed. It combines dynamical models from control theory with the perspective projection of some characteristic features of the objects involved to determine all relative motion parameters, without having to invert the nonlinear projection equations as done in [ ] or [ ]. The method is not confined to satellite rendezvous but constitutes a powerful general scheme for the integration of spatial and temporal aspects in real-time motion control by computer vision. It is equally applicable to submarine activities, industrial mobile robot positioning and land vehicle or surface ship guidance tasks, i.e. to any type of relative motion between objects. Its application to the bang-bang controlled air-cushion vehicle representing the satellite model is described in the main body of the paper.
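The core of the scheme just outlined can be illustrated as follows: rather than analytically inverting the perspective projection, one predicts the image coordinates of known model features from the current relative-state estimate and corrects that estimate using the linearized residual between predicted and measured features. The sketch below shows this prediction-and-correction step as a plain Gauss-Newton iteration; the pinhole model, the assumed focal length, the reduced 2-d.o.f. state (lateral offset x, range z) and the feature coordinates are all illustrative assumptions, not the paper's actual formulation (which embeds the correction in a Kalman filter with a dynamical model).

```python
import numpy as np

F = 500.0                       # assumed focal length in pixels

def project(state, pt):
    """Pinhole projection of a body-fixed feature point, displaced by
    the relative state (lateral offset x, range z)."""
    x, z = state
    return F * (pt[0] + x) / (pt[1] + z)

def jacobian(state, pt, eps=1e-6):
    """Numerical Jacobian of the projection w.r.t. the state."""
    J = np.zeros(2)
    for i in range(2):
        d = np.zeros(2); d[i] = eps
        J[i] = (project(state + d, pt) - project(state - d, pt)) / (2 * eps)
    return J

def correction_step(state, points, pixels):
    """One least-squares correction from predicted-vs-measured features;
    no analytic inversion of the projection equations is required."""
    J = np.array([jacobian(state, p) for p in points])
    r = np.array([u - project(state, p) for p, u in zip(points, pixels)])
    return state + np.linalg.lstsq(J, r, rcond=None)[0]

# Example: recover the relative state (0.5, 10.0) from the measured
# image positions of two model feature points.
true_state = np.array([0.5, 10.0])
points = [np.array([1.0, 0.0]), np.array([-1.0, 2.0])]
pixels = [project(true_state, p) for p in points]
est = np.array([0.0, 9.0])      # rough initial guess
for _ in range(10):
    est = correction_step(est, points, pixels)
```

In the tracking context of the paper, the dynamical model supplies the prediction for the next frame, so each cycle only needs a small correction of this kind; that is what makes the combination of temporal models and perspective projection efficient.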