Historical development of use of dynamical models for the representation of knowledge about real world processes in machine vision
Ernst D. Dickmanns
UniBw Munich, D-85577 Neubiberg, Germany
In their article [1], the authors present a review on "Background and state of the art in perceptual fusion" (Section 1.2) which gives a wrong picture of the historical developments in dynamic machine vision. The purpose of this short communication is to at least partially correct this; a more elaborate historical review is certainly desirable but, at the moment, beyond what I can afford.
Recursive estimation has been the standard data fusion technique in state estimation for vehicle guidance in the engineering community since Kalman's and Bucy's famous papers in the early 1960s. In the field of machine vision, or image sequence processing as it was called around 1980 by the artificial intelligence community dominating this new discipline, the variants of Kalman filtering were also known, of course. However, one mostly tried to exploit the data smoothing aspects using more or less arbitrary, convenient 'dynamical' models, often only setting a derivative in image plane coordinates constant or even zero. This, of course, had little to do with the original intention of capturing the essential physical properties of processes happening in the real world in 3D space over time.
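To make the distinction concrete, the following is a minimal sketch of a recursive Kalman filter whose prediction step uses a dynamical model of the real-world process, here a constant-velocity motion in world coordinates measured through noisy position observations. All function names and tuning values are illustrative choices of mine, not taken from any of the systems discussed; plain-Python 2x2 algebra is used to keep the sketch self-contained.

```python
# Minimal linear Kalman filter for a 1-D constant-velocity process;
# state x = [[position], [velocity]], scalar position measurements.

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def mat_add(A, B):
    return [[A[i][j] + B[i][j] for j in range(len(A[0]))] for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

def kalman_step(x, P, z, dt, q, r):
    """One predict/update cycle; z is a scalar position measurement."""
    F = [[1.0, dt], [0.0, 1.0]]         # dynamical model: x_{k+1} = F x_k
    Q = [[q * dt, 0.0], [0.0, q * dt]]  # process noise (simplified form)
    # --- predict with the real-world process model ---
    x = mat_mul(F, x)
    P = mat_add(mat_mul(mat_mul(F, P), transpose(F)), Q)
    # --- update with H = [1, 0] (only position is measured) ---
    S = P[0][0] + r                     # innovation covariance
    K = [P[0][0] / S, P[1][0] / S]      # Kalman gain from the covariances
    innov = z - x[0][0]                 # prediction error
    x = [[x[0][0] + K[0] * innov], [x[1][0] + K[1] * innov]]
    P = [[(1 - K[0]) * P[0][0], (1 - K[0]) * P[0][1]],
         [P[1][0] - K[1] * P[0][0], P[1][1] - K[1] * P[0][1]]]
    return x, P
```

The point of the sketch is that the gain is derived each cycle from the statistical properties of the modelled process and the sensor, rather than from an arbitrary smoothing scheme in image-plane coordinates; the unmeasured velocity state is estimated as a by-product of the model.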
As far as I could see from the literature up to now, only two groups on the globe approached real-time machine vision in the early 1980s with 'Kalman filtering' being used in the original sense of exploiting statistical knowledge about the process observed and of imposing interpretation constraints over time as knowledge about real-world processes, thereby introducing time explicitly into image sequence processing: one was a group at NASA-JPL (see [12] for further references) and the other one was our group at UniBw Munich [2, 6, 11, 15-17, 19, 20]. I would appreciate any hints, should we have overlooked some other activities.
While the application in [12] seems to have been a singular point at JPL, we started very early to exploit the general power of this approach to dynamic machine vision and readily applied it to a variety of tasks. Initially, the deterministic version, also known as the 'Luenberger observer', was applied by Meissner to the pole balancing problem by computer vision from 1979 to 1983 (published in English as [15]); then, Wuensche investigated several versions of observer and estimation algorithms [19]. These results have not been published in English; they led us to prefer the stochastic Kalman filter approach and derivatives thereof.
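The deterministic observer variant can be illustrated by a small sketch, again for a 1-D constant-velocity process with scalar position measurements; the model and the gains are illustrative assumptions of mine, not taken from Meissner's pole-balancing work. In contrast to the Kalman filter, the prediction error is fed back through fixed, precomputed gains instead of covariance-driven ones.

```python
def luenberger_step(x_hat, z, dt, l1, l2):
    """One step of a discrete Luenberger observer for a 1-D
    constant-velocity model; x_hat = [position, velocity],
    z is the measured position, (l1, l2) are fixed design gains."""
    # predict with the dynamical model
    pos = x_hat[0] + dt * x_hat[1]
    vel = x_hat[1]
    # feed the prediction error back through the fixed observer gains
    innov = z - pos
    return [pos + l1 * innov, vel + l2 * innov]
```

With stably chosen gains (e.g. l1 = 0.5, l2 = 0.2 for dt = 1, which place the error dynamics' eigenvalues inside the unit circle), the estimation error decays geometrically; what is lost relative to the stochastic version is the automatic weighting of measurements by their statistical reliability.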
This new approach, with sequential incorporation of measurement updates for handling varying measurement vector lengths as they naturally occur due to perturbations and during occlusions, was then applied to vehicle docking [3, 20], road vehicle guidance [2, 5, 11, 16, 19] and fully autonomous aircraft landing approaches to a rectangular runway [10]. In [6] a review on ten years of development of this approach in connection with the different application areas has been given; this led to an invited keynote address at IJCAI'89 in Detroit.
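The sequential (scalar-at-a-time) form of the measurement update can be sketched as follows; the flat state representation and the assumption of uncorrelated measurement noise (diagonal R, which is what makes the scalar form equivalent to the batch vector update) are my own illustrative choices. Because each available scalar measurement is incorporated independently, features lost to occlusion or perturbations simply drop out of the list, and the filter proceeds with whatever remains.

```python
def sequential_update(x, P, measurements):
    """Incorporate each available scalar measurement in turn.
    x: state as a flat list; P: covariance as a list of rows;
    measurements: list of (h_row, z, r) tuples, where h_row is the
    measurement row, z the scalar value and r its noise variance.
    Occluded features are simply absent from the list, so the
    effective measurement vector length may change every frame."""
    n = len(x)
    for h, z, r in measurements:
        # innovation and its scalar covariance S = h P h^T + r
        Ph = [sum(P[i][j] * h[j] for j in range(n)) for i in range(n)]
        S = sum(h[i] * Ph[i] for i in range(n)) + r
        K = [Ph[i] / S for i in range(n)]           # gain column
        innov = z - sum(h[i] * x[i] for i in range(n))
        x = [x[i] + K[i] * innov for i in range(n)]
        P = [[P[i][j] - K[i] * Ph[j] for j in range(n)] for i in range(n)]
    return x, P
```

Each pass involves only a scalar division instead of a matrix inversion, so the update cost degrades gracefully with the number of features actually visible in the current image.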
During the period from 1986 till 1989 we had many controversial discussions with researchers from the AI community all over the world who initially were very much opposed to this 'numerical' approach; it was the real-world, real-time applications we were able to demonstrate with rather moderate computing power that finally led to general acceptance in the vision community. Meanwhile, the very general background of this method for dynamic scene understanding and the resulting vision system architecture have been discussed extensively [7, 9]. Applications are being extended to landmark navigation [13], recognition of moving humans [14] and detailed 3D shape recognition of cars during motion covering all normal aspect conditions [18]; the two last-mentioned applications will become real-time with the new generation of microprocessors just around the corner.
In the aircraft landing approaches performed in 1991, data fusion from one vision sensor (a CCD camera) on a two-axis pan and tilt platform, a full range of inertial sensors and from aerodynamic speed measurements has been performed using prediction error feedback [10]. It looks strange to me that none of these publications of relevance to the field discussed by the authors has been mentioned; however, the method proposed is largely identical to the one we have developed over the last decade.
[1] J.L. Crowley and Y. Demazeau, "Principles and techniques for sensor data fusion", Signal Processing, Vol. 32, Nos. 1-2, May 1993, pp. 5-27.
[2] E.D. Dickmanns, "4D-dynamic scene analysis with integral spatio-temporal models", in: R. Bolles, ed., 4th Internat. Symposium on Robotics Research, MIT Press, Cambridge, MA, 1988. (Preprint published in 1987.)
[3] E.D. Dickmanns, "Object recognition and real-time relative state estimation under egomotion", in: A.K. Jain, ed., Real-Time Object Measurement and Classification, Springer, Berlin, 1988, pp. 41-56. (Preprint published at NATO-ASI Maratea, fall 1987.)
[4] E.D. Dickmanns, "Computer vision for flight vehicles", Zeitschrift für Flugwissenschaft und Weltraumforschung (ZFW), January 1988.
[5] E.D. Dickmanns, "An integrated approach to feature based dynamic vision", Proc. Computer Vision and Pattern Recognition (CVPR 88), Ann Arbor, June 1988.
[6] E.D. Dickmanns and V. Graefe, "a) Dynamic monocular machine vision; b) Applications of dynamic monocular machine vision", Machine Vision and Applications, November 1988, pp. 223-261.
[7] E.D. Dickmanns, "General dynamic vision architecture for UGV and UAV", J. Applied Intelligence, Vol. 2, Special Issue on 'Unmanned Vehicle Systems', 1992, pp. 251-270.
[8] E.D. Dickmanns, "Machine perception exploiting high-level spatio-temporal models", AGARD Lecture Series 185 'Machine Perception', Hampton, VA, USA; Munich; Madrid, September 1992, pp. 6-1 to 6-17.
[9] E.D. Dickmanns, "Expectation-based dynamic scene understanding", in: A. Blake and A. Yuille, eds., Active Vision, MIT Press, Cambridge, MA, 1992, pp. 303-335.
[10] E.D. Dickmanns and R. Schell, "Autonomous landing of airplanes by dynamic machine vision", Proc. IEEE Workshop on Applications of Computer Vision, Palm Springs, November/December 1992.
[11] E.D. Dickmanns and A. Zapp, "A curvature-based scheme for improving road vehicle guidance by computer vision", in: Mobile Robots, SPIE Proc., Vol. 727, Cambridge, MA, October 1986, pp. 161-168.
[12] D.B. Gennery, "Tracking known three-dimensional objects", Proc. AAAI, Pittsburgh, 1982, pp. 13-17.
[13] C. Hock and E.D. Dickmanns, "Intelligent navigation for autonomous robots using dynamic vision", XVIIth Congress of the Internat. Soc. for Photogrammetry and Remote Sensing (ISPRS), Washington, DC, August 1992.
[14] W. Kinzel and E.D. Dickmanns, "Moving humans recognition using spatio-temporal models", XVIIth Congress ISPRS, Washington, DC, August 1992.
[15] G. Meissner and E.D. Dickmanns, "Control of an unstable plant by computer vision", in: T.S. Huang, ed., Image Sequence Processing and Dynamic Scene Analysis, Springer, Berlin, 1983. (Preprint for NATO Advanced Study Institute published in October 1982, Braunlage.)
[16] B. Mysliwetz and E.D. Dickmanns, "A vision system with active gaze control for real-time interpretation of well structured dynamic scenes", in: L.O. Hertzberger, ed., Proc. Conf. on Intelligent Autonomous Systems, Amsterdam, 1986.
[17] B. Mysliwetz and E.D. Dickmanns, "Distributed scene analysis for autonomous road vehicles", Proc. SPIE Conf. on Mobile Robots, Vol. 852, Cambridge, MA, 1987, pp. 72-79.
[18] J. Schick and E.D. Dickmanns, "Simultaneous estimation of 3D shape and motion of objects by computer vision", Proc. IEEE Second Workshop on Visual Motion, Princeton, October 1991.
[19] H.-J. Wuensche, Verbesserte Regelung eines dynamischen Systems durch Auswertung redundanter Sichtinformation unter Beruecksichtigung der Einfluesse verschiedener Zustandsschaetzer und Abtastzeiten [Improved control of a dynamical system by evaluation of redundant visual information, considering the influences of different state estimators and sampling times], Report HSBwM/LRT/WE13a/IB/83-2, 1983.
[20] H.-J. Wuensche, "Detection and control of mobile robot motion by real-time computer vision", in: N. Marquino, ed., Advances in Intelligent Robotics Systems, Proc. SPIE, Vol. 727, Cambridge, MA, pp. 100-109.