Dickmanns et al. 1994b: The '4-D approach to dynamic machine vision', which exploits full spatio-temporal models of the process to be controlled, has been applied to on-board autonomous landing approaches of aircraft. Beyond image sequence processing, for which it was initially developed, it is also used for data fusion from a range of sensors: two video cameras, a radio altimeter, inertial acceleration, rate and orientation sensors, and aerodynamic velocity sensors. By prediction-error feedback, an internal representation of the aircraft state relative to the runway in 3-D space and time is servo-maintained in the interpretation process, from which the required control outputs are derived.
The validity and efficiency of the approach have been proven both in hardware-in-the-loop simulations and in flight experiments with a twin-turboprop Do 128 aircraft under perturbations from crosswinds and wind gusts. The software package has been ported to 'C' and onto a new transputer image-processing platform; the system has been expanded for bifocal vision with two cameras of different focal lengths, rigidly mounted relative to each other on a two-axis platform for viewing-direction control.