
 


 

Headings and Preface

 

1. Time line “Road vehicles with the sense of vision”, UniBwM

(When interested in 3rd-generation ‘EMS-vision’ only [1997 – 2003], use link from 2. and look at previous page from there.)

2. Some remarks on the historical background of digital real-time vision

3. “Firsts in real-time vision with the test vehicle VaMoRs” 1986 to 2003

4. Achievements with test vehicle VaMP 1994 – 2004

5. Time line “Air vehicles with the sense of vision” 1982 – 1997

6. Time line “Sense of vision in Outer Space” 1982 - 1994

7. Awards received

References

 

Preface

In 2015, two books on the subject of “Autonomous Driving” (resp. ‘Assistance Systems’, with a corresponding chapter) appeared that do not mention numerous historic facts and that even contain claims proven to be false [89 Buchbesprechung]. Following the ‘Bertha Benz’ autonomous drive and its discussion in the media, Dickmanns had begun in 2013 to assemble a review of the efforts of UniBw Munich in this field, in order to document for the general public the pioneering contributions that were the basis of the early successes of Daimler [88]. Since such comprehensive documentation will hardly be read at higher management levels, a very condensed summary has also been written; the present review is its English version. It contains cross-references to the relevant publications and to the dissertations at UniBwM that treat the original contributions in full detail.

Readers interested in these details may find systematic discussions of the methods developed and applied by our group to ground vehicles over two and a half decades in the book [79 Content]. On this website, the field of machines (vehicles and robots) using our method of ‘dynamic vision’ and the 4-D approach is treated with detailed graphic descriptions and several dozen video clips documenting real-world experiments.

 

A survey-video may be activated here: ‘Vehicles Learn To See’ (15 min)

 

A summarizing survey talk on the successful 4-D approach and the results achieved over three decades, with several video clips, may be found on YouTube (Google TechTalk) [83].

 

1. Time line “Road vehicles with the sense of vision”, UniBwM

 

year

Main activity

1977

Formulation of the long-term research goal ‘machine vision’ at UniBw Munich / LRT; development of a concept of the ‘Hardware-In-the-Loop’ (HIL) simulation facility for machine vision. Start procurement of components for the new building. [1 pdf]

1978

  Development of the window concept for real-time image sequence processing and understanding. A screening of the industrial market for a realization was negative. Colleague V. Graefe was convinced to participate: he created a custom-designed system based on industrial microprocessors, the 8-bit system BVV1 with Intel 8085.

  Selection of the experimental system ‘pole balancing’ (Stab-Wagen-System SWS) as initial application of computer vision based on edge features (see cover, top left). The Luenberger observer was chosen as the method for processing measurement data from the imaging process by perspective projection [3, D1].
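A minimal sketch of the observer idea may help: only the cart position is “measured” (as by the camera), and the unmeasured velocity is reconstructed by feeding the prediction error back into a simple plant model. Time step and gains below are illustrative assumptions, not the historical values.

```python
# Discrete-time Luenberger observer in the spirit of the pole-balancing setup:
# the camera delivers only the position; the velocity estimate is corrected
# via feedback of the prediction error.

def simulate(steps=200, dt=0.02):
    x, v = 0.0, 1.0      # true state; the velocity is unknown to the observer
    xh, vh = 0.0, 0.0    # observer estimates
    l1, l2 = 0.5, 2.0    # observer gains (hand-tuned so error modes are stable)
    u = 0.0              # control input (kept at zero for this sketch)
    for _ in range(steps):
        err = x - xh     # prediction error on the measured position
        # Observer: model prediction plus correction by the prediction error
        xh, vh = xh + dt * vh + l1 * err, vh + dt * u + l2 * err
        # True plant: double integrator (cart position and velocity)
        x, v = x + dt * v, v + dt * u
    return v, vh

v_true, v_est = simulate()
print(abs(v_true - v_est) < 0.05)   # → True: the unmeasured velocity is recovered
```

With the gains chosen here the error dynamics are stable (discrete eigenvalues 0.9 and 0.6), so the estimate converges within the simulated 4 seconds.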

1979

− Concept of the satellite model plant with an ‘air-cushion vehicle’ hovering on a horizontal plate (~ 2 x 3 m) as a second step towards machine vision, based on corner features (see cover, top right). [1 pdf, 59]

− Realization of the “Hardware-in-the-Loop” (HIL-) simulation facility with

  three-axes motion simulator for instruments and cameras,

  calligraphic vision simulation including projection onto a cylindrical screen,

  hybrid computer for real-time simulation of multi-body motion.

H.4.1 HIL-simulation

 

1980

Definition of a work plan towards real time machine vision [1 pdf].

 

1982

− First results with the real-world components ‘BVV1 and pole balancing’ presented at external conferences: 1. Karlsruhe, and 2. NATO Advanced Study Institute (ASI), Braunlage [3, 12 Excerpts pdf, 13].

First dissertation treating visual road vehicle guidance with HIL-simulation [D1 Kurzfassg].

First external funding (BMFT ‘Information Technology’) of two researchers in the field of ‘machine vision’. This allowed allocating money from the basic funds of the Institute to

− start the investigation of aircraft landing approach by machine vision [D2 Kurzfassung].

− Initiate ‘satellite model plant’ with air-cushion vehicle (cover right)[59, D3].

1984

Positive results in visual road vehicle guidance with HIL-simulation led to purchasing a 5-ton Mercedes D-508 van, to be equipped as the test vehicle for autonomous mobility and computer vision ‘VaMoRs’ with: a 220 V electrical power generator, actuators for steering, throttle and brakes, and a standard twin 19’’ industrial electronics rack.

 

Video Skidpan DBAG Stuttgart

Dec. 1986

 


1986

First publications of results in visual guidance of road vehicles at higher speeds using differential-geometry-curvature models for road representation [4], and for simulated satellite rendezvous and docking [59].

First presentation of computer vision in road vehicles, to an international audience from the fields of ‘automotive engineering’ and ‘human – machine interface’ [5 pdf].

− Demo of VaMoRs in skid-pan of Daimler-Benz AG (DBAG), Stuttgart: Vmax = 10 m/s, lateral guidance by vision; longitudinal control by lateral acceleration [D4 Kurzfassg; 79 Content, page 214].

 



Year of several breakthroughs (see also [88a])

in real-time vision for guidance of (road) vehicles

year

Main activity

1987
Formulation of the 4-D approach with spatiotemporal world models: the objects to be perceived are those in the real outside world; sensor signals are used to install and improve an internal representation (model) in the perceiving agent (dubbed ‘subject’ here, which may be either a biological or a robotic one). [4; 7 pdf] Video ‘Autonomous driving in rain’

Perception is achieved by feedback of prediction errors for objects in the vicinity hypothesized for the point ‘Here & Now’. Subjects generate the internal representations according to a fusion of both sensor signals received (as well as features derived therefrom) and generic (parameterized) models available in their knowledge base [7, 8 pdf, 9].

Hypotheses are accepted as perceptions of real-world objects (and of course other subjects) if the sum of all quadratic prediction errors over several video frames remains small. This approach allows dealing with the last video image only (no storage of images required!); the information from all previous images is captured in the best estimated state variables and parameters of the hypotheses accepted. This was a tremendous practical advantage in the 1980s [12 Excerpts pdf].
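The recursive scheme described above can be illustrated by a scalar stand-in (a one-dimensional Kalman filter): each new frame enters only through its prediction error (innovation), and everything learned from all earlier frames is condensed into the current estimate and its variance, so no image needs to be stored. Noise levels and the pseudo-noisy measurement sequence below are invented for illustration.

```python
# Scalar stand-in for the per-frame recursive update behind the 4-D approach.

def kalman_step(xh, p, z, q=0.001, r=0.5):
    """One predict/correct cycle for a constant scalar state."""
    p = p + q                    # prediction: model uncertainty grows by q
    k = p / (p + r)              # gain weighting the prediction error
    xh = xh + k * (z - xh)       # correction by the innovation z - xh
    p = (1.0 - k) * p
    return xh, p

# Deterministic pseudo-noisy "measurements" of the true value 3.0
zs = [3.0 + 0.4 * ((i * 7919) % 11 - 5) / 5.0 for i in range(50)]
xh, p = 0.0, 10.0                # vague initial hypothesis
for z in zs:
    xh, p = kalman_step(xh, p, z)
print(xh)                        # settles close to the true value 3.0
```

The loop touches each measurement exactly once; the pair (xh, p) plays the role of the “best estimated state variables and parameters” that replace image storage.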

Successful tests in high-speed driving on a free stretch of Autobahn over more than 20 km, at speeds up to the maximum speed of VaMoRs of 96 km/h (60 mph), were achieved in the summer. [D4 Kurzfassung; 79 Content, page 216]

Video VaMoRs on Autobahn 1987, no obstacles

1987/88: first BMFT-project with Daimler-Benz AG (DBAG) starts:

‘Autonome Mobile Systeme’ (AMS). Video ‘Obstacle on dirt road, 1988’

Machine vision becomes part of the 7-year PROMETHEUS-project: Pro-Art [6, 10 pdf]. Many European automotive companies and universities (~ 60) join the project.

 

 

Time line 1988 till 1992 (First-generation vision systems BVV2)


 

 

year

Main activity

 

 

 

1988

/

1989

   First summarizing publication on dynamic machine vision [12 pdf exc.]

   Stopping in front of a stationary obstacle with VaMoRs / Spurbus (DBAG) from speeds up to 40 km/h; final demo of AMS-project [14 pdf]. Video ‘Rastatt-demo 1988’

   Development of the superimposed linear curvature models for roads with horizontal & vertical curvatures in hilly terrain [D5 Kurzfassg; 79, section 9.2, Content]; Video HorVertCurvatureResults

   First internat. TV-report, BBC ‘Tomorrow’s World’: ‘Self drive van’ (video with BBC)

   Invited Keynote on “The 4-D approach to real-time vision” at the Int. Joint Conf. on Artificial Intelligence (IJCAI) in Detroit [13] (video IJCAI).

   Definition of term “subject” for general objects with senses & actuators [15 pdf].

   Concept for simultaneous estimation of shape and motion parameters of ground vehicles [17 pdf; D10 Kurzfassg]; derivation of control time histories for a car visually observed.
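The superimposed linear curvature models mentioned for 1988/89 are clothoid-type road models: curvature varies linearly with arc length, C(s) = c0 + c1·s. A minimal sketch (with invented parameter values, not those of [D5]) integrates the heading to obtain the planar lane shape and checks it against the small-angle approximation y(s) ≈ c0·s²/2 + c1·s³/6.

```python
import math

# Clothoid road model: curvature C(s) = c0 + c1 * s; lane shape by heading
# integration. Parameter values are illustrative only.

def clothoid_points(c0, c1, length, n=100):
    """Approximate a planar clothoid by integrating heading over arc length."""
    h = length / n
    x = y = heading = 0.0
    pts = [(0.0, 0.0)]
    for i in range(n):
        s = i * h
        heading += (c0 + c1 * s) * h   # d(psi)/ds = curvature
        x += math.cos(heading) * h
        y += math.sin(heading) * h
        pts.append((x, y))
    return pts

# Gentle curve: straight at s = 0 (c0 = 0), curvature rate c1 = 1e-4 per m^2
pts = clothoid_points(0.0, 1e-4, 50.0)
y_end = pts[-1][1]
# For small heading angles the lateral offset is y(s) ≈ c0*s**2/2 + c1*s**3/6
print(abs(y_end - 1e-4 * 50.0**3 / 6) < 0.1)
```

The few parameters (c0, c1 for the horizontal model, and an analogous pair for the vertical one) are exactly what the recursive estimator updates from frame to frame.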

1990

   Development of a general architecture for machine vision [16; 18; 20 Abstract (plus Introd.)].

1991

   UniBwM equips the 2nd vehicle of DBAG capable of vision, dubbed ‘Vision Information Technology Application’ (VITA, later VITA1), with a replica of Prof. Graefe’s BVV2 and with the improved software systems for perception & control. Video ‘Stop behind stationary car’.

   Visual autonomous ‘convoy driving’ at the Prometheus midterm demo in Torino, Italy, with ‘VITA’ (see left-hand side) at speeds between 0 and 60 km/h [16].

   (Limited) Handling of crossroads at night with headlights only.

   The higher management levels of automotive industry start accepting computer vision as valid goal in the Prometheus project.

 

 

1992

   Decision for the ‘Common European Demonstrators’ (CED) of DBAG for the final demo of Prometheus in 1994 to be large sedans capable of driving in standard French Autoroute traffic with guests on board. Switch to ‘transputers’ as processors for image sequence understanding. Video ‘VanCarTracking Transputers’

   Invited contribution to the special issue of IEEE-PAMI ‘Interpretation of 3-D Scenes’ (1992): Recursive 3-D Road and Relative Ego-State Recognition [21 Abstr+introd].

   1992 to 95: together with defense industry, equip a cross-country vehicle GD 300 (see image at left) with the sense of vision for handling unsealed roads.

   Video ‘Obscured trucks’ and Video ‘Gaze stabilization’

 

Time line 1992 till 1997 (Second-generation vision systems: Transputers)

 

year

Main activity

1993

DBAG and UniBwM acquire a Mercedes 500 SEL each for transformation into the CED for 1994: DBAG performs the mechanical works needed (conventional sensors added, 24 V additional power supply etc.) for both vehicles, while UniBwM is in charge of the bifocal vision systems for the front and the rear hemisphere: CED 302 = VITA-2 (DBAG) with additional cameras looking sideways, 3 guests; CED 303 = VaMoRs-PKW (short VaMP of UniBwM), 1 guest, for easier testing.

1994

Final demo PROMETHEUS & Int. Symp. on Intelligent Vehicles in Oct. Paris [23 a) pdf, 23b) Abstract]; performance shown by machine vision (without Radar and Lidar) in public three-lane traffic on Autoroute-1 near Paris with guests on board:

·    Free-lane running up to the maximum speed allowed in France of 130 km/h.

·    Transition to convoy driving behind an arbitrary vehicle in front, with distance depending on the speed driven.

·    Detection and tracking of up to six vehicles by bifocal vision, in both the front and the rear hemisphere, in the own lane and the directly neighboring ones.

·    Autonomous decision for a lane change and autonomous execution, after the safety driver has approved the maneuver by setting the turn signal. Video ‘Twofold lane change’

In total, more than 1000 km were driven by the two vehicles without accident on the three-lane Autoroutes around Paris.

1995

Our toughest international competitor, CMU (under T. Kanade), had performed a much-publicized long-distance demonstration in the USA during the summer of 1995, driving from the East to the West Coast with lateral control performed autonomously over multiple intervals (see figure above).

Since in Europe the next generation of transputers failed to appear, UniBwM and DBAG switched from transputers to Motorola PowerPC with more than tenfold computing performance; this allowed reducing the number of processors in their vision systems by a factor of five while doubling the evaluation rate to 25 Hz.

In response to the US demonstration (above), VaMP (UniBwM) performed a long-distance test drive from Munich to a project meeting in Odense (Denmark) with many fully autonomous sections (both lateral and longitudinal guidance, with binocular black-and-white vision relying on flexible edge extraction). About 95% of the distance attempted was driven autonomously, yielding > 1600 km in total. [D18 Kurzfassung]

The goal was to find those regions and components of the task promising the best progress with the 3rd-generation vision system to be started next; on the northern plains, Vmax,auton ~ 180 km/h was achieved [79, Section 9.4.2.5 Content].

Conclusions drawn:

1. Increase the look-ahead range to about 200 m for lane recognition.

2. Introduce at least one color camera for differentiating between white and yellow lane markings at construction sites and for reading traffic signs.

3. Add a) region-based features for better object recognition, and b) precise gaze control for object tracking and inertial gaze stabilization.

4. Recognize crossroads without in-advance knowledge of parameters; determine the angle of intersection and the road width. Autonomous turn-offs (VaMoRs, in the defense realm) [24 pdf].

For details see: [88b pdf]

Video

Active Gaze Control

1996

Decision to end the cooperation with DBAG; development of a longer-term concept for a high-level dynamic vision system: “Expectation-based, Multi-focal, Saccadic” (EMS-) Vision in a joint German – US-American cooperation.

(Example in video left: Saccadic perception of a traffic sign while passing)

 

 

Time line 1997 till 2004 (Third-generation vision systems: COTS)

 

year

Main activity

1997

Development of the 3rd-generation dynamic vision system with Commercial-Off-The-Shelf (COTS) components: EMS-vision, integrating all the capabilities developed separately at UniBwM over the last two decades, together with the first ‘real-time / full-frame stereo vision’ to be contributed by one American partner (the former Sarnoff Research Center, SRI, and Pyramid Vision Technology (PVT), Princeton, NJ) [25, 26]. The goal was flexible high-level vision without the need for much in-advance information (dubbed here: scout-type vision). Video ‘Turn-off taxiway Nbb’. On the higher system levels, two approaches were pursued in parallel:

1.      On the American side, the so-called 4D/RCS approach of Jim Albus, National Institute of Standards and Technology (NIST), Gaithersburg, MD [27, 28] was used; system integration went up to the level of military command & control, implemented both in the HMMWV of NIST and in the test vehicles of the US industrial partner General Dynamics, the eXperimental Unmanned Vehicle (XUV).

2.      On the German side, EMS-vision with specific capabilities for a) perception, b) generation of object/subject hypotheses, c) situation assessment and decision making, and d) control computation and output. It was implemented on a COTS system with 4 x Dual Pentium_4 together with a ‘Scalable Coherent Interface’ and two blocks of transputers as interface to real-time gaze and vehicle control. After development of the system with VaMoRs, it was transferred by the German industrial partner Dornier GmbH to the tracked test vehicle Wiesel_2 (see left-hand side).

Video Wiesel2 on dirt road

1998

 

till

 

2000

Transformation of separate capabilities into the EMS-framework and integration of “Expectation-based, Multi-focal, Saccadic” (EMS) vision. Video ‘Ditch detection’

Simplified version in VaMP for bifocal “Hybrid Adaptive Cruise Control” [D30 Kurzfassung; 33e) pdf]

A more detailed elaboration of this page may be found under link [88c) pdf].

See below for details in ‘Mission performance’.

2001

Final demo with VaMoRs in Germany 2001 showed ‘mission performance’ with 10 mission elements on the test track Neubiberg (see below) [29 – 46, D25 – D30; KurzfassgKomponenten, KurzfassgFähigkeiten, KurzfassgFahrbahnerkennung, KurzfassgVerhaltensentscheidung]. The separate Sarnoff-PVT stereo system in 2001 had a volume of about 30 liters.

Video ‘EMS-turn-off, saccades’; Video ‘Passing a crossing, EMS-vision’.

USA: Congress defines as a goal for 2015 that one third of combat ground vehicles shall be capable of driving autonomously. {This triggered the Grand and Urban Challenges of DARPA in the years 2004 to 2007.}

Video ‘Autonomous mission performance on-off-road with ditch avoidance’. “Scout-type” vision on a network of minor roads: Video ‘Estimation of crossroad parameters’ (scout-type vision).

2003


 

2. Some remarks to the historical background for digital real-time vision

 

The following figure gives background information on the state of development of digital microprocessor hardware towards the end of the last century. Before 1980, mobile digital real-time vision systems were simply impossible. Our custom-designed ‘window system BVV1’, working on half-frames (video fields) of 360 x 120 pixels at ~ 10 Hz in 1981, had a clock rate of ~ 5 MHz. The 2nd-generation systems of the first half of the 1990s, based on ‘transputers’ with 4 direct links to neighboring processors, worked with a clock rate of around 20 MHz. Towards the end of the 1990s, our 3rd-generation system based on the Pentium-4, dubbed “Expectation-based, Multi-focal, Saccadic” (or in short EMS-) vision system, approached a clock rate of 1 GHz (see center of the following figure, within the green rectangle).

[Figure: Historical background of vision systems UBM 1980 – 2004]

In the same time span of two decades, the transistor count per microprocessor increased by a factor of ~ 1500 (center right in the figure). The feature size on chips decreased by about two orders of magnitude (lower right corner). As a consequence, the physical size of vision systems (characterized by the four bullets in the upper left corner) could be shrunk to fit even into small passenger cars.

Our vision systems did not need precise localization by other means or by highly precise maps. Due to perception of the environment with differential-geometry models, they were able to recognize objects in the environment sufficiently precisely by feedback of prediction errors (dubbed “scout-type vision” here, see green angle in the upper center of the figure).

A new type of vision system developed with the DARPA Challenges of 2002 till 2007 (see brown angle in the upper right corner of the figure): each Challenge consisted of performing missions in well-explored and prepared environments. The main sensor used was a revolving laser range finder on top of the vehicles that provided (non-synchronized) distance images for the near range (~ 60 m) ten times per second. After synchronization and error correction, nice 3-D displays could be generated. For easy interpretation, both highly precise maps with stationary objects and precise localization of the own position by GPS sensors (supported by inertial and odometer signals) were required. Some groups demonstrated mission performance without any video signal interpretation. Because of the heavy dependence on previously collected information on the environment, this approach is dubbed “confirmation-type vision” here (brown angle).

The Google systems developed later and several other approaches world-wide followed this bandwagon track for more than a decade, though the need for more conventional vision with cameras was soon appreciated and is on its way. Finally, the computing power and sensors available in the not too distant future will allow a merger of both approaches for flexible and efficient use in various environments. On rough ground and for special missions (e.g. exploratory, cross-country rallies, military), active gaze control with a high-resolution central field of view may become necessary. Vision systems allowing the complexity required have been discussed in [79 content, chapter 12; 84; 85 pdf; 88c) pdf].

 

At UniBw Munich, towards the end of the 1990s, a new branch of advanced studies beside Aero-Space Engineering (Luft- und Raumfahrttechnik, LRT) was proposed in the framework of increasing the number of options for students in the second half of their studies: “Technology of Autonomous Systems” (Technik Autonomer Systeme, TAS), see page 123 of [63 Cover and Inhalt]. A specific institute with the identical name was founded early in the first decade of the new century; the head of these activities was appointed in 2006: the former PhD student Hans-Joachim Wuensche [D3 Kurzfassung], who had meanwhile gained long industrial experience on all levels.

He has built up the institute TAS and continues top-level research in autonomous driving for ground vehicles. Details may be found on the website

 

http://www.unibw.de/lrt8/forschung

 


3. “Firsts in real-time vision with test vehicle VaMoRs” 1986 to 2003

 

 

Phase a) BVV2 (based on Intel 80x86, 5 MHz clock rate); cycle time: 4 video half-frames.

1986

First fully autonomous runs (lateral & longitudinal guidance by vision) video ‘VaMoRs on campus road’; Demo of these capabilities in skidpan of Daimler-Benz AG (DBAG) Stuttgart: (Vmax = 10 m/s = 36 km/h). [D4 Kurzfassung; 79, Section 7.3.3] video ‘Skidpan Daimler Stuttgart’

1987

Autonomous runs on free Autobahn: up to V = 96 km/h, [D4 Kurzfassung]

1988

Driving on roads of low order by night & rainfall; detection and stopping in front of large stationary obstacle from 40 km/h [D4, D5 Kurzfassg; 9, 14]

1989

Recognition of horizontal & vertical road curvature [D5 Kurzfassg, 16]

video HorVertCurvatureResults

1990

Driving on unsealed roads at speeds up to 50 km/h [D5; D12 Kurzfassung];

turning off onto previously unknown cross roads [D19 Kurzfassung]

1992

Certification for fully autonomous driving in general public traffic on all types of roads (with at least three persons on board)

 

Phase b) Transition to ‘transputers’: 20 MHz clock rate, cycle time 80 ms.

1993

Duplication of the capabilities developed on the transputer system. video ‘1992 VanCarTracking LaneChange Transputers’

1993

Gaze stabilization during brake maneuver [D16 Kurzfassg; D17 Kurzfassg]

video BrakingGazeStabilization

1994

Bifocal saccadic perception of a traffic sign at 50 km/h [D16 Kurzfassg]

video GazeControlTrafficSign_Saccade

 

Phase c) New Dual-Pentium_4 in COTS-system; clock rate around 700 MHz

1997

Trinocular (divergent) stereo range estimation in the near range [D24 Kurzfassung]

2000

Expectation-based, Multi-focal, Saccadic (EMS-) vision achieves first real-time demonstrations with flexibly integrated capabilities for performing complex missions [32 ext.abstract; 33a) pdf; to 33e), D29 Kurzfassung]. Video TurnLeftSacc DirtRoad

2001

Final demo of the joint German – US-American project ‘AutoNav’ (flexible intelligent capabilities): mission performance on the robotic test site of UniBwM in Neubiberg; ‘on- & off-road driving’ including avoidance of both positive and negative (lower than the driving surface) obstacles. Real-time / full-frame stereo vision by a special processor system from SRI / PVT (Princeton) [34 to 46; D25 Kurzfassg; D26 Kurzfassg; D27 Kurzfassg; D28 Kurzfassg; D29 Kurzfassg; 79 Content, Chapter 14]. video ‘Mission performance on-off-road’

2003

Same mission with stereo vision on a single European-standard plug-in board in one of the PCs

 

 


4. Achievements with test vehicle VaMP 1994 – 2004

Topic

year

Text and links


1994

Test vehicle of ‘Prometheus’-project (SEL 500) [23a pdf; 23b Abstract]: 2nd-generation transputer vision system with up to 60 processors; [DDInf Zus.fassg; D17 Kurzfassg, D18 Kurzfassg, D23 Kurzfassg; D28 Kurzfassg, D30 Kurzfassg; 88b pdf]; bifocal vision both to front and rear; autonomous lane change decisions on two-lane roads. See M.1 Visual perception

[Image: VaMP perception 1994]

1994

Convoy driving on multi-lane highways [D12 Kurzfassg]: monocular motion stereo with the assumption of flat ground [23a pdf to 23d]. The width of vehicles is initially estimated (parameter adaptation, up to five vehicles in each hemisphere), then fixed for improved range estimation [D17 Kurzfassg]. Autoroute A1 in France: video ‘Twofold lane change’
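The flat-ground assumption behind this monocular range estimation can be sketched as follows: for a pinhole camera at height h over a level road, the number of image rows by which a vehicle’s ground-contact point lies below the horizon determines its range. Focal length, camera height, and pixel values here are invented for illustration, not the parameters of VaMP.

```python
# Flat-ground monocular ranging (pinhole camera, level road, small pitch):
# range = cam_height * f_pixels / rows_below_horizon.

def range_from_ground_contact(rows_below_horizon, f_pixels=750.0, cam_height=1.8):
    """Range (m) to a ground-contact point seen below the horizon line."""
    if rows_below_horizon <= 0:
        raise ValueError("point must lie below the horizon")
    return cam_height * f_pixels / rows_below_horizon

# The farther below the horizon the contact point appears, the nearer the car:
print(range_from_ground_contact(27.0))   # → 50.0 (meters)
print(range_from_ground_contact(54.0))   # → 25.0 (meters)
```

Estimating the vehicle width as an additional parameter, as described in the row above, then refines and stabilizes this range over consecutive frames.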


1995

Long distance drive Munich−Odense [D18 Kurzfassg, D23]: Transputer system with Power-PC 601 processors for feature extraction (front hemisphere only) [D17 Kurzfassg]. Collect statistical data on performance of edge-based vision system to derive improvements required for the 3rd-generation system based on COTS-components.


1995

High-speed road running (max. 175 km/h) [D18 Kurzfassg]; curvature-based perception of own & neighboring lanes; lateral ego-state including transitions between lanes [D30 Kurzfassg]. Large look-ahead ranges up to ~ 100 m.


1998

 

to

 

2000

Horizontal and vertical road curvature

At larger ranges, even small vertical curvatures are important for higher precision in monocular range estimation; EMS-vision [D30 Kurzfassg]. Yellow bars display curvature effects relative to the estimated position of the horizon. Video ‘2000 Hybrid Adaptive Cruise Control’

Bifocal vision complementing an industrial radar system for more robust distance keeping, EMS-vision system [D30 Kurzfassg]. Longitudinal control fully autonomous, lateral control by the human driver to keep him ‘in the loop’ for better personal attention.


2002

Wheel recognition (oblique views from the rear) [D30 Kurzfassg]. Wheels are the most characteristic item for separating vehicles from other large box-like objects; their recognition yields improved estimates of the relative heading angle of vehicles, and thus better situation assessment for lane changes. Video ‘Wheel recognition’

 


5. Time line “Air vehicles with the sense of vision” 1982 – 1997

Pictorial symbolization

year

Text on Topic

1982

 

till

 

1987

G. Eberl: derivation of the basic modeling of motion in 3 dimensions with homogeneous coordinates (4 x 4 matrices based on quaternions); representation of the landing strip as a planar rectangle, and of the ego-motion.

Edge extraction in the image and estimation of the ego-state (3-D trajectory over time) relative to the landing strip. Scene interpretation by feedback of prediction errors: the validity of the approach was proven for the special case without perturbations by stronger winds and gusts.

Dissertation Eberl 1987 [D2 Kurzfassg; 47; 48 pdf].

Video AircraftLandingApproach
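The homogeneous-coordinate modeling mentioned above (4 x 4 matrices built from quaternions) can be sketched generically; the construction below is the standard quaternion-to-rotation formula plus a translation column, not code from the cited dissertation.

```python
import math

# 4x4 homogeneous transform: a unit quaternion (qw, qx, qy, qz) gives the 3x3
# rotation block; the translation (tx, ty, tz) fills the last column.

def quat_to_hom(qw, qx, qy, qz, tx, ty, tz):
    """Build a 4x4 homogeneous transform from a unit quaternion + translation."""
    return [
        [1 - 2*(qy*qy + qz*qz), 2*(qx*qy - qz*qw),     2*(qx*qz + qy*qw),     tx],
        [2*(qx*qy + qz*qw),     1 - 2*(qx*qx + qz*qz), 2*(qy*qz - qx*qw),     ty],
        [2*(qx*qz - qy*qw),     2*(qy*qz + qx*qw),     1 - 2*(qx*qx + qy*qy), tz],
        [0.0, 0.0, 0.0, 1.0],
    ]

def apply(m, p):
    """Apply a homogeneous transform to a 3-D point (homogeneous w = 1)."""
    v = p + [1.0]
    return [sum(m[i][j] * v[j] for j in range(4)) for i in range(3)]

# 90-degree rotation about the z-axis (qw = qz = sqrt(0.5)) plus a 10 m shift:
q = math.sqrt(0.5)
m = quat_to_hom(q, 0.0, 0.0, q, 10.0, 0.0, 0.0)
print([round(c, 6) for c in apply(m, [1.0, 0.0, 0.0])])   # → [10.0, 1.0, 0.0]
```

Chaining such matrices (runway-to-camera, camera-to-body, etc.) by plain matrix multiplication is what makes this representation convenient for ego-state estimation.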


1987

 

till

 

1992

R. Schell: inclusion of inertial sensors: measuring perturbations from gusts with almost no delay time, redundantly, by 4 rate gyros and 4 accelerometers; handling of the different delay times in the optical and the inertial data paths. With this approach, the validity of the method was proven also for strong perturbations in the HIL-simulation loop. This led to DFG-funded fully autonomous test flights with BVV2 on board a Do-128 plane of TU Brunswick at their airport (see left). These were performed until just before touch-down; then a ‘go-around’ for the next test was flown by the safety pilot [49 to 52]; see the result in the latter part of

video AircraftLandingApproach.

Dissertation Schell 1992 [D7 Kurzfassg], [50 pdf]

1992

 

till

 

1997

Transputer system for image feature extraction; recognition of large obstacles on the landing strip during the landing approach [53; 54]. S. Werner: landmark navigation of a helicopter close to the ground [55, 56, 57 pdf]; realistic HIL-simulation of the vicinity of the airport Brunswick (several km in extent). Missions with different trajectory elements; for the perception part see [DDInf Zusammenfassung]

Dissertation Werner 1997 [D22 Kurzfassg]

Video HelicopterMissionHIL-SimBs

 

See also: HelicoptersWithSenseOfVision

 


6. Time line “Sense of vision in Outer Space” 1982 - 1994

Pictorial symbolization

year

Text on topic

1982

 

 

till

 

 

1987

H.-J. Wünsche: concept of the model plant for ‘satellite docking’ with an air-cushion vehicle (laboratory experiment for aerospace technology). Two pairs of air jets for reaction control of the planar motion. Perturbation-free air supply from the top. Used as the subject of the dissertation ‘Docking by machine vision’; extraction of corner features for recognizing the docking partner (white) and the relative pose with BVV1 (8-bit microprocessors) plus a PDP computer. Sequence of maneuvers: 1. approach, 2. find the docking direction by a circular go-around, 3. rendezvous with mechanical lock-in [58; 59; 60].

Dissertation Wünsche 1987

[D3 Kurzfassg] See video ‚Air cushion vehicle’

1989

 

 

till

 

 

1993

 

Project of the Deutsches Zentrum für Luft- und Raumfahrt (Robotik im Weltraum, DLR-Oberpfaffenhofen), Prof. G. Hirzinger: initial subcontract for the development of software compensating the time delays in the tele-operation loop ‘robot arm and cameras on board in orbit, image evaluation in a ground station on Earth’ (~ 6 to 7 s time lag).

 

The new transputer system with the 4-D approach offered an alternative to the solution chosen by DLR [DDInf Zusammenfassung]. After successful tests, the transputer system got its first chance on May 2, 1993, in the D2 mission with the Space Shuttle Columbia. It was a full success (a worldwide first) [61 Abstract].

Dissertation Fagerer 1996 [D20 Kurzfassg].

see video ROTEX
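The delay-compensation task described above can be sketched in a few lines: with a round-trip lag of several seconds, commands must refer to the object state extrapolated ahead by the loop delay, using the motion model maintained by the recursive estimator. A minimal illustrative sketch (all numbers invented, not taken from the ROTEX software):

```python
import numpy as np

# Illustrative sketch (invented numbers, not the original ROTEX code):
# with a ~6-7 s round-trip delay between orbit and ground station, a grasp
# command must refer to the object's state extrapolated ahead by the delay.

DELAY = 6.5  # assumed round-trip time lag in seconds

def predict_ahead(pos, vel, delay=DELAY):
    """Extrapolate the pose of a free-floating object by the loop delay,
    assuming constant drift velocity (reasonable in micro-gravity)."""
    return pos + vel * delay

# estimated state of the floating object at the time the image was taken
pos_est = np.array([0.40, 0.10, 0.00])   # position in m (hypothetical)
vel_est = np.array([0.01, -0.02, 0.00])  # drift velocity in m/s (hypothetical)
grasp_target = predict_ahead(pos_est, vel_est)
print(grasp_target)  # where the object will be when the command takes effect
```

The same idea generalizes to any motion model the estimator carries; the constant-velocity assumption here is only the simplest case.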

 

 


7. Awards received

 

1996 ‘Olympus’ award of the DAGM (Deutsche Arbeitsgemeinschaft für Mustererkennung) (Heidelberg)

1997 Philip Morris Forschungspreis for the development of the ‘4-D approach to dynamic machine vision and the applications demonstrated’ (Munich).

1998 Digi-Globe for contributions to ‘Vehicles capable of vision’, (Munich)

2006 ‘Hella Engineering Award’ for ‘Contributions to road vehicles capable of vision’ (Lippstadt)

2016 Lifetime Achievement Award of IEEE-ITSS (Intelligent Transportation Systems Society) for ‘Pioneering Work in the Field of Autonomous Vehicles’, in Rio de Janeiro at ITS-Conference-2016, Nov. (jpg)

2017 ‘Eduard Rhein Technologiepreis’ of the EDUARD-RHEIN-STIFTUNG

References

Dissertations mentored

D1 Meissner H.G. 1982: Steuerung dynamischer Systeme aufgrund bildhafter Informationen. Kurzfassung

D2 Eberl G. 1987: Automatischer Landeanflug durch Rechnersehen. Kurzfassung

D3 Wünsche H.-J. 1987: Erfassung und Steuerung von Bewegungen durch Rechnersehen. Kurzfassung.

D4 Zapp A. 1988: Automatische Straßenfahrzeugführung durch Rechnersehen. Kurzfassung

D5 Mysliwetz B. 1990: Parallelrechnerbasierte Bildfolgeninterpretation zur autonomen Fahrzeugführung. Kurzfassung

D6 Otto K.-D. 1990: Linear-quadratischer Entwurf mit Strukturvorgabe. Kurzfassung

D7 Schell F.-R. 1992: Bordautonomer automatischer Landeanflug aufgrund bildhafter und inertialer Meßdatenauswertung. Kurzfassung

D8 Uhrmeister B. 1992: Verbesserung der Lenkung eines Luft-Luft-Flugkörpers durch einen abbildenden Sensor. Kurzfassung

D10 Schick J. 1992: Gleichzeitige Erkennung von Form und Bewegung durch Rechnersehen. Kurzfassung

D11 Hock Ch. 1994: Wissensbasierte Fahrzeugführung mit Landmarken für autonome Roboter. Kurzfassung

D12 Brüdigam C. 1994: Intelligente Fahrmanöver sehender autonomer Fahrzeuge in autobahn-ähnlicher Umgebung. Kurzfassung

D13 Schmid M. 1994: 3-D-Erkennung von Fahrzeugen in Echtzeit aus monokularen Bildfolgen. Kurzfassung

D14 Kinzel W. 1994: Präattentive und attentive Bildverarbeitungsschritte zur visuellen Erkennung von Fußgängern. Kurzfassung

D15 Baader A. 1995: Ein Umwelterfassungssystem für multisensorielle Montageroboter.

D16 Schiehlen J. 1995: Kameraplattformen für aktiv sehende Fahrzeuge. Kurzfassung

D17 Thomanek F. 1996: Visuelle Erkennung und Zustandsschätzung von mehreren Straßenfahrzeugen zur autonomen Fahrzeugführung. Kurzfassung

D18 Behringer R. 1996: Visuelle Erkennung und Interpretation des Fahrspurverlaufes durch Rechnersehen für ein autonomes Straßenfahrzeug. Kurzfassung

D19 Müller N. 1996: Autonomes Manövrieren und Navigieren mit einem sehenden Fahrzeug. Kurzfassung

D20 Fagerer Ch. 1996: Automatische Teleoperation eines Tracking- und Greifvorgangs im Weltraum, basierend auf Bilddatenauswertung. Kurzfassung

D21 Schubert A. 1996: Synthese diskreter Zustandsregler durch eine Verbindung direkter und indirekter Methoden. Kurzfassung

D22 Werner S. 1997: Maschinelle Wahrnehmung für den bordautonomen automatischen Hubschrauberflug. Kurzfassung

DDInf Dickmanns Dirk 1997: Rahmensystem für die visuelle Wahrnehmung veränderlicher Szenen durch Computer. Diss. UniBw München, Fak. Informatik Zusammenfassung

D23 Maurer M. 2000: Flexible Automatisierung von Straßenfahrzeugen mit Rechnersehen. Kurzfassung

D24 Rieder A. 2000: Fahrzeuge sehen – Multisensorielle Fahrzeugerkennung in einem verteilten Rechnersystem für autonome Fahrzeuge. Kurzfassung

D25 Lützeler M. 2002: Fahrbahnerkennung zum Manövrieren auf Wegenetzen mit aktivem Sehen. Kurzfassung

D26 Gregor R. 2002: Fähigkeiten zur Missionsdurchführung und Landmarkennavigation. Kurzfassung

D27 Pellkofer M. 2003: Verhaltensentscheidung für autonome Fahrzeuge mit Blickrichtungssteuerung. Kurzfassung

D28 Von Holt V. 2004: Integrale Multisensorielle Fahrumgebungserfassung nach dem 4-D Ansatz. Kurzfassung

D29 Siedersberger K.H. 2004: Komponenten zur automatischen Fahrzeugführung in sehenden (semi-) autonomen Fahrzeugen. Kurzfassung

D30 Hofmann U. 2004: Zur visuellen Umfeldwahrnehmung autonomer Fahrzeuge. Kurzfassung

 

Ground vehicles

[1] Dickmanns E.D. 1980: Untersuchung und mögliche Arbeitsschritte zum Thema „Künstliche Intelligenz: Rechnersehen und –steuerung dynamischer Systeme“. HsBwM / LRT / WE 13 / IB 80-1 (18 S.) textpdf

[2] Meissner H.G. 1982: Steuerung dynamischer Systeme aufgrund bildhafter Informationen. Kurzfassung

[3] Meissner HG; Dickmanns ED 1983: Control of an Unstable Plant by Computer Vision. In T.S. Huang (ed): Image Sequence Processing and Dynamic Scene Analysis. Springer, Berlin, pp 532-548

[4] Dickmanns E.D., Zapp A. 1986: A Curvature-based Scheme for Improving Road Vehicle Guidance by Computer Vision. In: 'Mobile Robots', SPIE Proc. Vol. 727, Cambridge, MA, pp 161-168

[5] Dickmanns E.D. 1986: Computer Vision in Road Vehicles – Chances and Problems. ICTS-Symp. on "Human Factors Technology for Next-Generation Transportation Vehicles", Amalfi, Italy. Abstract, pdf

[6] 1987a: State of the art review for ‘PRO-ART, 11 170 Integrated approaches’. UniBw M/LRT/WE 13/IB/87-2 (limited distribution)

[7] 1987b: 4-D-Dynamic Scene Analysis with Integral Spatio-Temporal Models. 4th Int. Symp. on Robotics Research, Santa Cruz. In: Bolles RC; Roth B 1988. Robotics Research, MIT Press, Cambridge, pp 311-318. pdf

[8] , Zapp A. 1987c: Autonomous High Speed Road Vehicle Guidance by Computer Vision. 10th IFAC World Congress Munich, Preprint Vol. 4, 1987, pp 232-237. pdf

[9] 1987d: Object Recognition and Real-Time Relative State Estimation Under Egomotion. NATO Advanced Study Institute, Maratea, Italy (Aug./Sept.). In: A.K. Jain (ed): Real-Time Object Measurement and Classification. Springer-Verlag, Berlin, 1988, pp 41-56 Abstract

[10] Dickmanns E.D.; Graefe V.; Niegel W. 1987: Abschlussbericht Definitionsphase PROMETHEUS Pro-Art, pp. 1-18. pdf

[11] Roland A., Shiman P. 2002: Strategic Computing: DARPA and the Quest for Machine Intelligence, 1983–1993. MIT Press

[12] Dickmanns E.D.; Graefe V. 1988: a) Dynamic monocular machine vision. Machine Vision and Applications, Springer International, Vol. 1, pp 223-240. b) Applications of dynamic monocular machine vision. (ibid), pp 241-261 Abstract , Excerpts pdf

[13] Dickmanns E.D. 1989: Invited keynote talk ‘The 4-D approach to real-time vision’, IJCAI, Detroit, Aug. 23 (11:10 – 12:40); including several videos with experimental results.

[14] Dickmanns E.D., Christians T. 1989: Relative 3-D-state Estimation for Autonomous Visual Guidance of Road Vehicles. In T. Kanade et al (eds): 'Intelligent Autonomous Systems 2', Amsterdam, Dec. Vol. 2, pp 683-693; also appeared in: Robotics and Autonomous Systems 7 (1991), Elsevier Science Publ., pp 113-123 pdf

[15] Dickmanns E.D. 1989: Subject-Object Discrimination in 4-D Dynamic Scene Interpretation by Machine Vision. Proc. IEEE-Workshop on Visual Motion, Newport Beach, pp 298-304. pdf

[16] Dickmanns E.D.; Mysliwetz B.; Christians T. 1990: Spatio-Temporal Guidance of Autonomous Vehicles by Computer Vision. IEEE-Transactions on Systems, Man and Cybernetics, Vol. 20, No. 6, Special Issue on Unmanned Vehicles and Intelligent Robotic Systems, pp 1273−1284.

[17] Schick J.; Dickmanns E.D. 1991: Simultaneous Estimation of 3-D Shape and Motion of Objects by Computer Vision. IEEE Workshop on Visual Motion, Princeton, N.J., pdf

[18] Dickmanns E.D.1988a: An Integrated Approach to Feature Based Dynamic Vision. Int. Conference on Vision and Pattern Recognition (CVPR), Ann Arbor, 1988, pp 820-825.

[19] 1991: Dynamic vision for locomotion control – An evolutionary path to intelligence. CCG-lecture. pdf

[20] Dickmanns E.D.1992: A General Dynamic Vision Architecture for UGV and UAV. Journal of Applied Intelligence 2, pp. 251-270 Abstract (plus Introd.)

[21] Dickmanns E.D.; Mysliwetz B. 1992: Recursive 3-D Road and Relative Ego-State Recognition. IEEE-Transactions PAMI, Vol. 14, No. 2, Special Issue on 'Interpretation of 3-D Scenes', Feb. pp 199-213 Abstract (plus Introd.)

[22] Dickmanns E.D.1993: Expectation-Based Dynamic Scene Understanding. In A. Blake and A. Yuille (eds): 'Active Vision', MIT Press, Cambridge, Mass., 1993, pp. 303-335

[23] Four contributions in Masaki (ed) 1994: Proc. of Int. Symp. on Intelligent Vehicles '94, Paris, Oct.

a) Dickmanns E.D.; Behringer R.; Dickmanns D.; Hildebrandt T.; Maurer M.; Thomanek F.; Schiehlen J.: The Seeing Passenger Car 'VaMoRs-P'. pp 68-73 Abstract , pdf

b) Thomanek F.; Dickmanns E.D.; Dickmanns D.: Multiple Object Recognition and Scene Interpretation for Autonomous Road Vehicle Guidance. pp. 231-236 Abstract

c) Von Holt: Tracking and Classification of Overtaking Vehicles on Autobahnen. pp 314-319

d) Schiehlen J.; Dickmanns E.D.: A Camera Platform for Intelligent Vehicles. pp 393-398

[24] Dickmanns E.D.; Müller N. 1995: Scene Recognition and Navigation Capabilities for Lane Changes and Turns in Vision-Based Vehicle Guidance. Control Engineering Practice, 2nd IFAC Conf. on Intelligent Autonomous Vehicles-95, Helsinki. pdf

[25] Mandelbaum R., Hansen M., Burt P., Baten S. 1998: Vision for Autonomous Mobility: Image Processing on the VFE-200. In: IEEE International Symposium on ISIC, CIRA and ISAS

[26] Baten S.; Lützeler M.; Dickmanns E.D.; Mandelbaum R.; Burt P. 1998: Techniques for Autonomous Off-Road Navigation. IEEE Intelligent Systems, Vol. 13, No. 6, pp 57-65

[27] Albus J.S., 2000: 4-D/RCS reference model architecture for unmanned ground vehicles. Proc. of the International Conference on Robotics and Automation, San Francisco, April 24-27

[28] Albus J.S., Meystel A.M. 2001: Engineering of Mind. – An Introduction to the Science of Intelligent Systems. Wiley Series on Intelligent Systems

[29] Dickmanns, E.D. 1997: Vehicles Capable of Dynamic Vision. Proc. 15th International Joint Conference on Artificial Intelligence (IJCAI-97), Vol. 2, Nagoya, Japan, pp 1577-1592 Abstract (with Introduction)

[30] Dickmanns E.D. 1998: Expectation-based, Multi-focal, Saccadic Vision for Perceiving Dynamic Scenes (EMS-Vision). In C. Freksa (ed.): Proc. in Artificial Intelligence, Vol. 8, pp 47-54

[31] Dickmanns E.D., Wuensche H.-J. 1999: Dynamic Vision for Perception and Control of Motion. In: B. Jaehne, H. Haußenecker and P. Geißler (eds.) Handbook of Computer Vision and Applications, Vol. 3, Academic Press, pp 569-620 Content (and Introd.)

[32] Dickmanns E.D. 1999: An Expectation-based, Multi-focal, Saccadic (EMS) Vision System for Vehicle Guidance. In Hollerbach and Koditschek (eds.): ‚Robotics Research‘ (The Ninth Symposium), Springer-Verlag, Extended_abstract

[33] Five contributions to EMS-Vision in the Proceedings of the Internat. Symposium on Intelligent Vehicles (IV’2000), Dearborn, (MI, USA), Oct. 4-5:

a) Gregor, R., Lützeler, M., Pellkofer, M., Siedersberger, K.H. and Dickmanns, E.D.: EMS-Vision: A Perceptual System for Autonomous Vehicles. pp 52-57 pdf

b) Pellkofer, M., Dickmanns, E.D.: EMS-Vision: Gaze Control in Autonomous Vehicles. pp 296-301 pdf

c) Lützeler, M. und Dickmanns, E.D.: EMS-Vision: Recognition of Intersections on Unmarked Road Networks. pp 302-307 pdf

d) Gregor, R., Dickmanns, E.D.: EMS-Vision: Mission Performance on Road Networks. pp 468-473; pdf

e) Hofmann, U.; Rieder, A., Dickmanns, E.D.: EMS-Vision: Application to Hybrid Adaptive Cruise Control. pp 468-473 pdf

f) Siedersberger K.-H., Dickmanns E.D: EMS-Vision: Enhanced Abilities for Locomotion. pdf

[34] Gregor R., Lützeler M., Dickmanns E.D. 2001: EMS-Vision: Combining on- and off-road driving. Proc. SPIE Conf. on Unmanned Ground Vehicle Technology III, AeroSense ‘01, Orlando (FL), April 16-17, pp. 329-440 Abstract (and part of Introd.)

[35] Gregor R., Lützeler M., Pellkofer M., Siedersberger K.-H., Dickmanns E.D. 2001: A Vision System for Autonomous Ground Vehicles with a Wide Range of Maneuvering Capabilities. Proc. ICVS, Vancouver, July

[36] Siedersberger K.-H.; Pellkofer M., Lützeler M., Dickmanns E.D., Rieder A., Mandelbaum R., Bogoni I. 2001: Combining EMS-Vision and Horopter Stereo for Obstacle Avoidance of Autonomous Vehicles. Proc. ICVS, Vancouver, July

[37] Pellkofer M., Lützeler M., Dickmanns E.D. 2001 : Interaction of Perception and Gaze Control in Autonomous Vehicles. Proc. SPIE: Intelligent Robots and Computer Vision XX; Oct., Newton, USA, pp 1-12 Abstract (and Introd.)

[38] Pellkofer M., Lützeler M. and Dickmanns E.D. 2001: Vertebrate-type perception and gaze control for road vehicles. Proc. Int. Symp. on Robotics Research, Nov., Lorne, Australia pdf

[39] Gregor, R., Lützeler, M., Pellkofer, M., Siedersberger, K.H. and Dickmanns, E.D. 2002: EMS-Vision: A Perceptual System for Autonomous Vehicles. IEEE Trans. on Intelligent Transportation Systems, Vol.3, No.1, March, pp. 48 – 59

[40] Dickmanns E.D. 2002: Expectation-based, Multi-focal, Saccadic (EMS) Vision for Ground Vehicle Guidance. Control Engineering Practice 10 (2002), pp.907 - 915

[41] 2001/2002: Komplexes technisches Auge aus normalen CCD-Sensoren zur dynamischen Umgebungserfassung. In R.-J. Ahlers (Hrsg): 7. Symposium „Bildverarbeitung 2001“ (Veranstaltung verschoben auf Juni 2002), Techn. Akad. Esslingen, Nov. 2001, S. 163-172.

[42] Pellkofer M., Dickmanns E.D. 2002: Behavior Decision in Autonomous Vehicles. Proc. of the Int. Symp. on ‚Intell. Veh.‘02‘, Versailles, June

[43] Pellkofer M., Lützeler M. and Dickmanns E.D. 2003: Vertebrate-type interaction of perception and gaze control for autonomous road vehicles. In: Jarvis R.A. and Zelinski A.: Robotics Research, The Tenth International Symposium. Springer Verlag, pp.271-288

[44] Dickmanns E.D. 2003: Expectation-based, Multi-focal, Saccadic Vision - (Understanding dynamic scenes observed from a moving platform). In: Olver P.J., Tannenbaum A. (eds.): ‘Mathematical Methods in Computer Vision‘, Springer-Verlag, pp. 19-35

[45] Pellkofer M, Hofmann U., Dickmanns E.D. 2003: Autonomous cross-country driving using active vision. SPIE Conf. 5267, Intelligent Robots and Computer Vision XXI: Algorithms, Techniques, and Active Vision. Photonics East, Providence, Rhode Island, Oct.

[46] Website www.dyna-vision.de (31310 VaMoRs MissionPerform IFF.htm)

 

 

Air vehicles

[47] Dickmanns E.D.; Eberl G. 1987: Automatischer Landeanflug durch maschinelles Sehen. Jahrestagung der DGLR (DGLR-Jahrbuch 1987), Berlin, pp 294-300

[48] Dickmanns E.D. 1988: Computer Vision for Flight Vehicles. Zeitschrift für Flugwissenschaft und Weltraumforschung (ZFW), Vol. 12 (88), pp 71-79. pdf (text excerpts)

[49] Schell F.-R.; Dickmanns E.D. 1989: Autonomous Automatic Landing through Computer Vision. AGARD Conference Proc. No. CP-455: Advances in Techniques and Technologies for Air Vehicle Navigation and Guidance, Lissabon, May, pp 24.1−24.9

[50] Dickmanns E.D.; Schell F.-R. 1992: Visual Autonomous Automatic Landing of Airplanes. AGARD Symp. on Advances in Guidance and Control of Precision Guided Weapons, Ottawa, May. pdf

[51] Schell F.-R.; Dickmanns E.D. 1992: Autonomous Landing of Airplanes by Dynamic Machine Vision. Proc. IEEE-Workshop on 'Applications of Computer Vision', Palm Springs, Nov/Dec

[52] Schell F.-R.; Dickmanns E.D. 1994: Autonomous Landing of Airplanes by Dynamic Machine Vision. Machine Vision and Application, Vol. 7, No. 3, pp 127-134

[53] Fürst S.; Werner S.; Dickmanns D.; Dickmanns E.D. 1997: Landmark navigation and autonomous landing approach with obstacle detection for aircraft. AeroSense ’97, SPIE Proc. Vol. 3088, Orlando FL, April 20-25, pp 94-105.

[54] Fürst S., Dickmanns E.D.: A vision based navigation system for autonomous aircraft. Robotics and Autonomous Systems 28, 1999, pp 173-184

[55] Werner S.; Buchwieser A.; Dickmanns E.D. 1995: Real-Time Simulation of Visual Machine Perception for Helicopter Flight Assistance. Proc. SPIE - Aero Sense, Orlando, FL, April

[56] Werner S.; Fürst S.; Dickmanns D.; Dickmanns E.D. 1996: A vision-based multi-sensor machine perception system for autonomous aircraft landing approach. Enhanced and Synthetic Vision AeroSense '96, SPIE, Vol. 2736, Orlando, FL, April, pp 54-63

[57] Fürst S., Werner S., Dickmanns D.; Dickmanns E.D. 1997: Landmark Navigation and Autonomous Landing Approach with Obstacle Detection for Aircraft. AGARD MSP Symp. on System Design Considerations for Unmanned Tactical Aircraft (UTA), Athens, Greece, October 7-9, pp 20-1 – 20-11 pdf

 

 

Space applications

[58] Dickmanns E.D.; Wünsche H.-J. 1985: Drehlage-Regelung eines Satelliten durch Echtzeit-Bildfolgenverarbeitung. In H. Niemann (ed): Mustererkennung 1985, Informatik Fachberichte 107, Springer-Verlag, pp 239-243

[59] Dickmanns E.D.; Wünsche H.-J. 1986: Satellite Rendezvous Maneuvers by Means of Computer Vision. Jahrestagung der DGLR, München, Okt. 1986. In: Jahrbuch 1986 Bd. 1 der DGLR, Bonn, pp 251-259.

[60] Dickmanns E.D.; Wünsche H.-J. 1986: Regelung mittels Rechnersehen. Automatisierungstechnik (at), 34 1/1986 pp. 16-22

[61] Fagerer C.; Dickmanns D.; Dickmanns E.D. 1994: Visual Grasping with Long Delay Time of a Free Floating Object in Orbit. 4th IFAC Symposium on Robot Control (SY.RO.CO.'94), Capri, Italy, pp 947-952 Abstract

 

 

Publications after retirement (general surveys, image feature extraction)

[62] Dickmanns E.D. 2001: Efficient Computation of Intensity Profiles for Real-Time Vision. Proc. Workshop ‚Robot Vision 2001‘, Auckland, Febr.

[63] 2001: Fahrzeuge lernen sehen. Broschüre (139 S.) und CD (mit ~ 40 Minuten Videoclips) zu 25 Jahren Forschung und Lehre an der UniBwM, Okt. Abstract (Cover, Inhalt)

[64] 2002: Sehende Fahrzeuge - UniBwM setzte jahrelang Maßstäbe. Hochschulkurier UniBwM, Nr 14 / April, S. 13 – 22.

[65] 2002: The development of machine vision for road vehicles in the last decade. Proc. of the Int. Symp. on ‚Intell. Veh.‘02‘, Versailles, June, pdf

[66] 2002: Vision for ground vehicles: history and prospects. Int. J. of Vehicle Autonomous Systems (IJVAS), Vol.1, No.1, pp. 1 – 44. Abstract

[67] 2002: Expectation-based, Multi-focal, Saccadic (EMS) Vision. NORSIG-2002. Proc. 5th Nordic Signal Processing Symposium, Tromsoe – Trondheim, Oct. 2002

[68] 2002: Zukünftige Wahrnehmungsfähigkeiten sehender Fahrzeuge. Proc. 11. Aachener Kolloquium ‘Fahrzeug- und Motorentechnik‘, Band 2, Okt. 2002, S. 1192-1204

[69] 2002: Neue Erklärungsmöglichkeit zur Frage des Geistes? Forschung & Lehre, 10/2002, S. 546-547

[70] 2003: Chapter 6.43.36 „Automation and Control in Traffic Systems“ in Encyclopedia of Life Support Systems (EOLSS), Eolss Publishers, Oxford, UK, 2003, [http://www.eolss.net]

[71] 2003: An advanced vision system for ground vehicles. Proc. Workshop ‚In Vehicle Cognitive Computer Vision System‘, ICVS Graz, April

[72] 2003: A General Cognitive System Architecture Based on Dynamic Vision for Motion Control. Proc. The 7th World Multi Conference on Systemics, Cybernetics and Informatics, (SCI) July, Orlando. Abstract

[73] 2004: Dynamic Vision Based Intelligence. AI-Magazine, Vol. 25, Nr. 2, Summer 2004. pp. 10-30. Abstract

[74] 2004: Three specific stages in visual perception for vertebrate-type dynamic machine vision. ACIVS, Brussels (CD).

[75] 2004: A Third-Generation Dynamic Vision System for Vehicle Guidance. NATO Research and Technology Agency, System Concepts and Integration (SCI) Panel, – Task Group 118. Rom, Oct. 2004 (Automation Technologies and Application Considerations for Highly Integrated Mission Systems)

[76] 2005: Vision: Von Assistenz zum autonomen Fahren. In Maurer und Stiller (Hrg).: ‚Fahrerassistenzsysteme mit maschineller Wahrnehmung’. Springer Verlag, (2005), pp. 203-237.

[77] Dickmanns E.D., Wuensche H.-J. 2005: (chapter 6): Advanced Sensing Techniques for Automated System Applications: Perception based on Dynamic Vision. Contribution to NATO-RTA, June

[78] 2006: Nonplanarity and efficient multiple feature extraction. Proc. Int. Conf. on Vision and Applications (Visapp), Setubal, Febr. 2006 (8 pages) pdf

[79] Dickmanns E.D. 2007: Dynamic Vision for Perception and Control of Motion. Springer-Verlag, April, (474 pages.) Abstract , Content

[80] 2008: Corner Detection with Minimal Effort on Multiple Scales. Int. Conf. on Vision and Applications (Visapp), Madeira, Jan. (6 pages) pdf

[81] 2008: Generalized Nonplanarity Features. UniBwM / LRT / TAS / TR 2008-08

[82] 2009: Detaillierte visuelle Umgebungserkennung durch vereinheitlichte Extraktion von Kanten, Ecken und linear schattierten Flecken. In Maurer und Stiller (Hrg): ‚Fahrerassistenzsysteme mit maschineller Wahrnehmung’. Springer Verlag,

[83] 2011: After-Dinner-Talk at AGI-11 in Mountain View (4th International Conference on ‘Artificial General Intelligence’, Google-Center, Aug.8), California. A slide- and video-clip show under the heading “Dynamic Vision as Key Element for Artificial General Intelligence”; displayed in YouTube with introduction (till 4:20 [min:sec]) and discussion (from 1:04 [h: min])

http://www.youtube.com/watch?v=YZ6nPhUG2i0

[84] 2012: Detailed Visual Recognition of Road Scenes for Guiding Autonomous Vehicles. pp. 225-244, in Chakraborty S. and Eberspächer J. (eds): Advances in Real-Time Systems, Springer, (355 pages)

[85] 2013: Maneuvers as Knowledge Elements for Vision and Control. IEEE-Workshop on Robot Motion Control, July 2013, Wasowo (Posen), pp. 42-47, Abstract , pdf

[86] 2015: BarVEye: Bifocal active gaze control for autonomous driving. VISAPP 2015, Berlin, March, pp. 428-436 (presented as posters) Abstract

[87] 2015: Knowledge Bases for visual Dynamic Scene Understanding. VISAPP 2015, Berlin, March, pp. 209-215, (presented as posters) Abstract

[88] 2015: Contributions to Visual Autonomous Driving. A Review.

a) Part I: Basic Approach to Real-time Computer Vision with Spatiotemporal Models (1977 – 89) pdf

b) Part II: PROMETHEUS and the 2nd-generation System for Dynamic Vision (1987 – 1996) pdf

c) Part III: Expectation-based, Multi-focal, Saccadic (EMS-) Vision (1997 – 2004) pdf

[89] Dickmanns E.D. 2015: Buchbesprechung ‘Autonomes Fahren - Technische, rechtliche und gesellschaftliche Aspekte‘, Daimler Benz Stiftung, Hrsg.: M. Maurer, J. C. Gerdes, B. Lenz, H. Winner; Verlag Springer. pdf

[90] Dickmanns E.D. 2017: Entwicklung des Gesichtssinns für autonomes Fahren – Der 4-D Ansatz brachte 1987 den Durchbruch. In VDI-Berichte 2292: AUTOREG 2017, VDI Verlag GmbH (ISBN 978-3-18-092233-1 (S. 5-20) pdf

 


 


 


Readers interested in these may find systematic discussions of the methods our group developed and applied to ground vehicles over two and a half decades in the book [79 Content]. On the Internet, the field of machines (vehicles and robots) using our method of ‘dynamic vision’ and the 4-D approach is treated on this website; it contains detailed graphical descriptions and several dozen video clips documenting real-world experiments.

 

A survey-video may be activated here: ‘Vehicles Learn To See’ (15 min)

 

A summarizing survey talk on the successful 4-D approach and the results achieved over three decades, with several video clips, may be found on YouTube (Google TechTalk) [83].

 

1. Time line “Road vehicles with the sense of vision”, UniBwM

 

year

Main activity

1977

Formulation of the long-term research goal ‘machine vision’ at UniBw Munich / LRT; development of a concept for the ‘Hardware-In-the-Loop’ (HIL) simulation facility for machine vision. Start of procurement of components for the new building. [1 pdf]

1978

  Development of the window concept for real-time image sequence processing and understanding. Screening of the industrial market for a realization → negative. Colleague V. Graefe was convinced to participate: he created a custom-designed system based on industrial microprocessors, the 8-bit system BVV1 with Intel 8085.

  Selection of the experimental system ‘pole balancing’ (Stab-Wagen-System SWS) as the initial application of computer vision based on edge features (see cover top left). The Luenberger observer was chosen as the method for processing measurement data from the imaging process by perspective projection [3, D1].
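The observer idea can be illustrated with a minimal sketch (not the original 1978 implementation; the reduced cart model, gains, and sampling time are invented here): a discrete-time Luenberger observer reconstructs the unmeasured state variables by feeding back the error between the measured and the predicted image position.

```python
import numpy as np

# Minimal sketch of a discrete-time Luenberger observer, the method chosen
# in [3, D1]; model, gains, and sampling time are invented for illustration.
# Only the cart position (an edge location in the image) is measured; the
# velocity is reconstructed by feeding back the measurement prediction error.

dt = 0.05                          # vision-loop sampling time (s), assumed
A = np.array([[1.0, dt],           # linearized cart model: position, velocity
              [0.0, 1.0]])
B = np.array([[0.5 * dt**2],
              [dt]])
C = np.array([[1.0, 0.0]])         # the image yields only the position

L = np.array([[0.6], [1.5]])       # observer gain (stable estimator poles)

def observer_step(x_hat, u, y_meas):
    """Predict with the model, then correct with the prediction error."""
    y_pred = (C @ x_hat).item()
    return A @ x_hat + B * u + L * (y_meas - y_pred)

# feed simulated measurements of a cart moving at a constant 1 m/s
x_hat = np.zeros((2, 1))
for k in range(200):
    x_hat = observer_step(x_hat, 0.0, k * dt)
print(x_hat.ravel())               # converges to [position, velocity] = [10.0, 1.0]
```

The eigenvalues of A − LC lie inside the unit circle for this gain, so the estimation error decays even though the velocity is never measured directly.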

1979

− Concept of the satellite model plant with an ‘air-cushion vehicle’ hovering on a horizontal plate (of ~ 2 x 3 m) as a second step towards machine vision based on corner features (see cover top right). [1 pdf, 59]

− Realization of the “Hardware-in-the-Loop” (HIL-) simulation facility with

  a three-axis motion simulator for instruments and cameras,

  calligraphic vision simulation including projection onto a cylindrical screen,

  hybrid computer for real-time simulation of multi-body motion.

H.4.1 HIL-simulation

 

1980

Definition of a work plan towards real time machine vision [1 pdf].

 

1982

− First results with the real-world components ‘BVV1 and pole balancing’ presented at external conferences: 1. Karlsruhe, and 2. NATO Advanced Study Institute (ASI), Braunlage [3, 12 Excerpts pdf, 13].

First dissertation treating visual road vehicle guidance with HIL-simulation [D1 Kurzfassg].

First external funding (BMFT Information Technology) of two researchers in the field of ‘machine vision’. This allowed allocating money from the basic funds of the institute to

− start the investigation of aircraft landing approach by machine vision [D2 Kurzfassung].

− initiate the ‘satellite model plant’ with the air-cushion vehicle (cover right) [59, D3].

1984

Positive results in visual road vehicle guidance with HIL-simulation led to purchasing a 5-ton van Mercedes D-508 to be equipped as the test vehicle for autonomous mobility and computer vision ‘VaMoRs’ with: a 220 V electrical power generator, actuators for steering, throttle and brakes, as well as a standard twin industrial 19’’ electronics rack.

 

Video Skidpan DBAG Stuttgart

Dec. 1986

 


1986

First publications of results in visual guidance of road vehicles at higher speeds using differential-geometry-curvature models for road representation [4], and for simulated satellite rendezvous and docking [59].

First presentation of computer vision in road vehicles, to an international audience from the fields of ‘automotive engineering’ and ‘human – machine interface’ [5 pdf].

− Demo of VaMoRs in skid-pan of Daimler-Benz AG (DBAG), Stuttgart: Vmax = 10 m/s, lateral guidance by vision; longitudinal control by lateral acceleration [D4 Kurzfassg; 79 Content, page 214].

 



1987 – year of several breakthroughs (see also [88a]) in real-time vision for guidance of (road) vehicles

year

Main activity

Formulation of the 4-D approach with spatiotemporal world models: the objects to be perceived are those in the real outside world; sensor signals are used to install and improve an internal representation (model) in the perceiving agent (dubbed ‘subject’ here, which may be either a biological or a robotic one). [4; 7 pdf] Video ‘Autonomous driving in rain’

Perception is achieved by feedback of prediction errors for the objects hypothesized in the vicinity of the point ‘Here & Now’. Subjects generate the internal representations by fusing the sensor signals received (and features derived from them) with generic (parameterized) models available in their knowledge base [7, 8 pdf, 9].

Hypotheses are accepted as perceptions of real-world objects (and, of course, of other subjects) if the sum of all quadratic prediction errors over several video frames remains small. This approach makes it possible to work with only the most recent video image (no storage of images required!); the information from all previous images is captured in the best estimates of the state variables and parameters of the hypotheses accepted. This was a tremendous practical advantage in the 1980s [12 Excerpts pdf].
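The recursive scheme described above can be illustrated with a toy prediction-error filter (a Kalman filter; the motion model, noise levels, and frame rate below are assumed, not taken from the original systems): each new frame contributes only its prediction error, while everything learned from earlier frames is summarized in the state estimate and its covariance.

```python
import numpy as np

# Toy illustration of recursive estimation by prediction-error feedback
# (a Kalman filter); model, noise levels, and frame rate are assumed here.
# Each video frame contributes only its prediction error; all earlier
# frames are summarized in the state estimate and covariance, so no
# image storage is needed.

dt = 1.0 / 25.0                          # 25 Hz video, assumed
F = np.array([[1.0, dt], [0.0, 1.0]])    # constant-velocity motion model
H = np.array([[1.0, 0.0]])               # only the feature position is measured
Q = np.diag([1e-4, 1e-3])                # process noise covariance (assumed)
R = np.array([[1e-2]])                   # measurement noise covariance (assumed)

def step(x, P, z):
    # predict from the spatiotemporal model
    x, P = F @ x, F @ P @ F.T + Q
    # correct by feeding back the prediction error (innovation)
    y = z - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    return x + K @ y, (np.eye(2) - K @ H) @ P

rng = np.random.default_rng(0)
x_est, P = np.zeros(2), np.eye(2)
for k in range(1, 101):                  # object moving at 2.0 units/s
    z = np.array([2.0 * k * dt]) + 0.1 * rng.standard_normal(1)
    x_est, P = step(x_est, P, z)
print(x_est)                             # close to the true state [8.0, 2.0]
```

Note that the loop never revisits old measurements: the pair (x_est, P) is the complete memory of the image sequence, which is exactly the practical advantage stressed in the text.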

Successful tests in high-speed driving on a free stretch of Autobahn over more than 20 km, at speeds up to the maximum speed of VaMoRs of 96 km/h (60 mph), were achieved in the summer. [D4 Kurzfassung; 79 Content, page 216]

Video VaMoRs on Autobahn 1987, no obstacles

1987/88 first BMFT-project with Daimler-Benz AG (DBAG) starts:

Autonom Mobile Systeme’ (AMS). Video ‘Obstacle on dirt road, 1988’

Machine vision becomes part of the 7-year PROMETHEUS-project: Pro-Art [6, 10 pdf]. Many European automotive companies and universities (~ 60) join the project.

 

 

Time line 1988 till 1992 (First-generation vision systems BVV2)


 

 

year

Main activity

 

 

 

1988

/

1989

   First summarizing publication on dynamic machine vision [12 pdf exc.]

   Stopping in front of a stationary obstacle with VaMoRs / Spurbus (DBAG) from speeds up to 40 km/h; final demo of AMS-project [14 pdf]. Video ‘Rastatt-demo 1988’

   Development of the superimposed linear curvature models for roads with horizontal & vertical curvatures in hilly terrain [D5 Kurzfassg; 79, section 9.2, Content]; Video HorVertCurvatureResults

   First international TV report, BBC ‘Tomorrow’s World’: ‘Self-drive van’ (video with BBC)

   Invited Keynote on “The 4-D approach to real-time vision” at the Int. Joint Conf. on Artificial Intelligence (IJCAI) in Detroit [13] (video IJCAI).

   Definition of term “subject” for general objects with senses & actuators [15 pdf].

   Concept for simultaneous estimation of shape and motion parameters of ground vehicles [17 pdf; D10 Kurzfassg]; derivation of control time histories for a car visually observed.
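The superimposed linear curvature models mentioned above treat road curvature as varying linearly with arc length (a clothoid). A minimal sketch of how such a model generates a road centerline, using simple Euler integration of the heading (illustrative only, not the estimation scheme of [D5]):

```python
import math

def clothoid_points(c0, c1, length, n=100):
    """Sample the centerline of a road segment whose curvature varies
    linearly with arc length: C(s) = c0 + c1*s (a clothoid, the model
    used for horizontal -- and analogously vertical -- curvature).
    Returns (x, y) points from numerical integration of the heading."""
    pts = [(0.0, 0.0)]
    x, y, heading = 0.0, 0.0, 0.0
    ds = length / n
    for i in range(n):
        s = i * ds
        heading += (c0 + c1 * s) * ds   # d(heading)/ds = C(s)
        x += math.cos(heading) * ds
        y += math.sin(heading) * ds
        pts.append((x, y))
    return pts

# Example: straight entry blending into a 500 m radius curve over 100 m
pts = clothoid_points(c0=0.0, c1=(1 / 500) / 100, length=100.0)
x_end, y_end = pts[-1]   # lateral offset y_end builds up to a few meters
```

Estimating the parameters c0 and c1 recursively from edge features in the image sequence is what allowed lane recognition with very few measurements per frame.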

1990

   Development of a general architecture for machine vision [16; 18; 20 Abstract (plus Introd.)].

1991

   UniBwM equips the 2nd vehicle of DBAG capable of vision, dubbed ‘Vision Information Technology Application’ (VITA, later VITA1), with a replica of Prof. Graefe’s BVV2 and with the improved software systems for perception & control. Video ‘Stop behind stationary car’.

   Visual autonomous ‘convoy driving’ at the Prometheus midterm demo in Torino, Italy, with ‘VITA’ (see left-hand side) at speeds up to 60 km/h [16].

   (Limited) Handling of crossroads at night with headlights only.

   The higher management levels of the automotive industry start accepting computer vision as a valid goal in the Prometheus project.

 

 

1992

   Decision for the ‘Common European Demonstrators’ (CED) of DBAG for the final Prometheus demo in 1994 to be large sedans capable of driving in standard French Autoroute traffic with guests on board. Switch to ‘Transputers’ as processors for image sequence understanding. Video ‘VanCarTracking Transputers’

   Invited contribution to the IEEE-PAMI special issue ‘Interpretation of 3-D Scenes’ (1992): Recursive 3-D Road and Relative Ego-State Recognition [21 Abstr+introd].

   1992 to 95: Together with the defense industry, the cross-country vehicle GD 300 (see image at left) is equipped with the sense of vision for handling unsealed roads.

   Video ‘Obscured trucks’ and Video ‘Gaze stabilization’

 

Time line 1992 till 1997 (Second-generation vision systems: Transputers)

 

year

Main activity

1993

DBAG and UniBwM acquire a Mercedes 500 SEL each for transformation into the CED for 1994: DBAG performs the mechanical work needed (conventional sensors added, 24 V additional power supply etc.) for both vehicles, while UniBwM is in charge of the bifocal vision systems for the front and the rear hemisphere: CED 302 = VITA-2 (DBAG) with additional cameras looking sideways, 3 guests; CED 303 = VaMoRs-PKW (short VaMP, of UniBwM), 1 guest, for easier testing.

1994

Final demo PROMETHEUS & Int. Symp. on Intelligent Vehicles in Oct. Paris [23 a) pdf, 23b) Abstract]; performance shown by machine vision (without Radar and Lidar) in public three-lane traffic on Autoroute-1 near Paris with guests on board:

·    Free lane running up to the maximally allowed speed in France of 130 km/h.

·    Transition to convoy-driving behind an arbitrary vehicle in front with distance depending on speed driven;

·    Detection and tracking of up to six vehicles by bifocal vision both in the front and the rear hemisphere in the own lane and the directly neighboring ones.

·    Autonomous decision for lane change and autonomous realization, after the safety driver has agreed to the maneuver by setting the blink light. Video ‘Twofold lane change’

In total more than 1000 km have been driven by the two vehicles without accident on the three-lane Autoroutes around Paris.
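The convoy-following distance mentioned above depended on the speed driven. A common way to express such a rule is a constant time gap to the vehicle ahead; the sketch below is a hypothetical illustration, not the actual Prometheus controller:

```python
def convoy_distance_m(speed_kmh, time_gap_s=1.8, d_min=5.0):
    """Speed-dependent following distance (illustrative time-gap rule):
    keep a constant time gap to the vehicle in front, with a minimum
    standstill distance. Parameter values are assumptions."""
    return max(d_min, speed_kmh / 3.6 * time_gap_s)

d = convoy_distance_m(130.0)   # at the French Autoroute limit: ~65 m
```

A 1.8 s gap at 130 km/h corresponds to roughly 65 m, i.e. about half the speedometer reading in meters — the classic rule of thumb taught in German driving schools.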

1995

Our toughest international competitor, CMU (under T. Kanade), performed a much-publicized long-distance demonstration in the USA in the summer of 1995, driving from the East Coast to the West Coast with lateral control handled autonomously over multiple intervals (see figure above).

Since the next generation of transputers failed to appear in Europe, UniBwM and DBAG switched from transputers to Motorola PowerPC processors with more than tenfold computing performance; this allowed the number of processors in their vision systems to be reduced by a factor of five while doubling the evaluation rate to 25 Hz.

In response to the US demonstration (above), VaMP (UniBwM) performed a long-distance test drive from Munich to a project meeting in Odense (Denmark) with many fully autonomous sections (both lateral and longitudinal guidance, with binocular black-and-white vision relying on flexible edge extraction). About 95% of the attempted distance was driven autonomously, more than 1600 km in total. [D18 Kurzfassung]

The goal was to find the task regions and components promising the best progress for the 3rd-generation vision system to be started next; in the northern plains, Vmax,auton ~ 180 km/h was achieved [79, Section 9.4.2.5 Content].

Conclusions drawn: 1. Increase the look-ahead range to about 200 m for lane recognition. 2. Introduce at least one color camera for differentiating between white and yellow lane markings at construction sites and for reading traffic signs. 3. Add a) region-based features for better object recognition, and b) precise gaze control for object tracking and inertial gaze stabilization. 4. Recognize crossroads without in-advance knowledge of parameters; determine the angle of intersection and the road width; autonomous turn-offs (VaMoRs, in the defense realm) [24 pdf]. For details see [88b pdf].

Video ‘Active Gaze Control’

1996

Decision to end the cooperation with DBAG; development of a longer-term concept for a high-level dynamic vision system, “Expectation-based, Multi-focal, Saccadic” (EMS) vision, in a joint German–US-American cooperation.

(Example in video left: Saccadic perception of a traffic sign while passing)

 

 

Time line 1997 till 2004 (Third-generation vision systems: COTS)

 

year

Main activity

1997

Development of the 3rd-generation dynamic vision system with Commercial-Off-The-Shelf (COTS) components: EMS-vision, integrating all capabilities developed separately at UniBwM over the last two decades, together with the first ‘real-time/full-frame stereo vision’ contributed by one American partner (the former Sarnoff Research Center, SRI, and Pyramid Vision Technology (PVT), Princeton, NJ) [25, 26]. The goal was flexible high-level vision without the need for much in-advance information (dubbed here: scout-type vision). Video ‘Turn-off taxiway Nbb’. On the higher system levels, two approaches were pursued in parallel:

1.      On the American side the so-called 4D/RCS approach of Jim Albus, National Institute of Standards and Technology (NIST), Gaithersburg, MD [27, 28] was used; system integration went up to the level of military command & control, implemented both in the HMMWV of NIST and in the test vehicles of the US industrial partner General Dynamics, the eXperimental Unmanned Vehicle (XUV).

2.      On the German side, EMS-vision with specific capabilities for a) perception, b) generation of object/subject hypotheses, c) situation assessment and decision making, and d) control computation and output. It was implemented on a COTS system with 4 x Dual Pentium_4 together with a ‘Scalable Coherent Interface’ and two blocks of transputers as interface to real-time gaze and vehicle control. After development of the system with VaMoRs, it was transferred by the German industrial partner Dornier GmbH to the tracked test vehicle Wiesel_2 (see left-hand side).

Video Wiesel2 on dirt road

1998 till 2000

Transformation of separate capabilities into the EMS-framework and integration of “Expectation-based, Multi-focal, Saccadic” (EMS) vision. Video ‘Ditch detection’

Simplified version in VaMP for bifocal “Hybrid Adaptive Cruise Control” [D30 Kurzfassung; 33e) pdf]

A more detailed elaboration of this page may be found under link [88c) pdf].

 

See below for details in ‘Mission performance’.

2001

Final demo with VaMoRs in Germany 2001 showed ‘Mission performance’ with 10 mission elements on the test track Neubiberg (see below) [29 – 46, D25 – D30; KurzfassgKomponenten, KurzfassgFähigkeiten, KurzfassgFahrbahnerkennung, KurzfassgVerhaltensentscheidung]. The separate Sarnoff-PVT stereo system in 2001 had a volume of about 30 liters.

Video ‘EMS-turn-off, saccades’; Video ‘Passing a crossing, EMS-vision’.

USA: Congress defines as goal for 2015: one third of combat ground vehicles shall be capable of driving autonomously. {This triggered the Grand and Urban Challenges of DARPA for the years 2004 to 2007.}

Video ‘Autonomous mission performance on-off-road with ditch avoidance’. “Scout-type” vision on a network of minor roads: Video ‘Estimation of crossroad parameters’ (scout-type vision).

2003


 

2. Some remarks to the historical background for digital real-time vision

 

The following figure gives background information on the state of development of digital microprocessor hardware towards the end of last century. Before 1980, mobile digital real-time vision systems were simply impossible. Our custom-designed “window-system BVV 1” working on half-frames (video fields) at ~ 10 Hz (360 x 120 pixels) in 1981 had a clock rate of ~ 5 MHz. The 2nd-generation systems based on ‘transputers’ in the first half of the 1990s with 4 direct links to neighboring processors worked with around 20 MHz clock rate. Towards the end of the 1990s our 3rd-generation system based on Pentium-4, dubbed “Expectation-based, Multi-focal, Saccadic” (or in short EMS-) vision system approached a clock rate of 1 GHz (see center of the following figure within the green rectangle).

[Figure: Historical background of vision systems, UBM 1980–2004]

In the same time span of two decades the transistor count per microprocessor increased by a factor of ~ 1500 (center right in the figure). The feature size on chips decreased by about two orders of magnitude (see lower right corner). As a result, the physical size of vision systems (characterized by the four bullets in the upper left corner) could be shrunk to fit even into small passenger cars.

Our vision systems did not need precise localization by other means or by highly precise maps. By perceiving the environment with differential-geometry models, they were able to recognize objects in the environment sufficiently precisely through feedback of prediction errors (dubbed “scout-type vision” here; see the green angle in the upper center of the figure).

A new type of vision system developed with the DARPA Challenges 2002 till 2007 (see the brown angle in the upper right corner of the figure): each Challenge consisted of performing missions in well-explored and prepared environments. The main sensor used was a revolving laser range finder on top of the vehicles that provided (non-synchronized) distance images for the near range (~ 60 m) ten times per second. After synchronization and error correction, clean 3-D displays could be generated. For easy interpretation, both highly precise maps with stationary objects and precise localization of the vehicle’s own position by GPS sensors (supported by inertial and odometer signals) were required. Some groups demonstrated mission performance without any video signal interpretation. Because of the heavy dependence on previously collected information about the environment, this approach is dubbed “confirmation-type vision” here (brown angle). The Google systems developed later and several other approaches worldwide followed this bandwagon track for more than a decade, though the need for more conventional vision with cameras was soon appreciated and is being addressed. Eventually, the computing power and sensors available in the not too distant future will allow a merger of both approaches for flexible and efficient use in various environments.

On rough ground and for special missions (e.g. exploratory, cross-country rallies, military), active gaze control with a high-resolution central field of view may become necessary. Vision systems allowing the complexity required have been discussed in [79 content chapter 12; 84; 85 pdf; 88c) pdf].

 

At UniBw Munich toward the end of the 1990s, a new branch of advanced studies beside Aero-Space Engineering (Luft- und Raumfahrttechnik, LRT) was proposed to increase the number of options for students in the second half of their studies: “Technology of Autonomous Systems” (Technik Autonomer Systeme, TAS); see page 123 of [63 Cover and Inhalt]. An institute of the same name was founded early in the first decade of the new century; in 2006, the former PhD student Hans-Joachim Wuensche [D3 Kurzfassung], who had meanwhile gained long industrial experience on all levels, was appointed as head of these activities.

He has built up the institute TAS and continues top-level research in autonomous driving for ground vehicles. Details may be found on the website

 

http://www.unibw.de/lrt8/forschung

 


3. “Firsts” in real-time vision with test vehicle VaMoRs, 1986 to 2003

 

 

Phase a) BVV2 (based on Intel 80x86, 5 MHz clock rate); cycle time: 4 video half-frames.

1986

First fully autonomous runs (lateral & longitudinal guidance by vision) video ‘VaMoRs on campus road’; demo of these capabilities in the skidpan of Daimler-Benz AG (DBAG), Stuttgart (Vmax = 10 m/s = 36 km/h). [D4 Kurzfassung; 79, Section 7.3.3] video ‘Skidpan Daimler Stuttgart’

1987

Autonomous runs on free Autobahn: up to V = 96 km/h, [D4 Kurzfassung]

1988

Driving on low-order roads at night and in rainfall; detection of, and stopping in front of, a large stationary obstacle from 40 km/h [D4, D5 Kurzfassg; 9, 14]

1989

Recognition of horizontal & vertical road curvature [D5 Kurzfassg, 16]

video HorVertCurvatureResults

1990

Driving on unsealed roads at speeds up to 50 km/h [D5; D12 Kurzfassung];

turning off onto previously unknown cross roads [D19 Kurzfassung]

1992

Certification for fully autonomous driving in general public traffic on all types of roads (with at least three persons on board)

 

Phase b) Transition to ‘Transputers’: 20 MHz clock rate, cycle time 80 ms.

1993

Duplication of the capabilities developed, now on the transputer system. video ‘1992 VanCarTracking LaneChange Transputers’

1993

Gaze stabilization during brake maneuver [D16 Kurzfassg; D17 Kurzfassg]

video BrakingGazeStabilization

1994

Bifocal saccadic perception of a traffic sign at 50 km/h [D16 Kurzfassg]

video GazeControlTrafficSign_Saccade

 

Phase c) New Dual-Pentium_4 in COTS-system; clock rate around 700 MHz

1997

(Trinocular/divergent) stereo range estimation in the near range [D24 Kurzfassung]

2000

Expectation-based, Multi-focal, Saccadic (EMS-) vision achieves first real-time demonstrations with flexibly integrated capabilities for performing complex missions [32 ext.abstract; 33a) pdf; to 33e), D29 Kurzfassung]. Video TurnLeftSacc DirtRoad

2001

Final demo of the joint German–US-American project ‘AutoNav’ (flexible intelligent capabilities): Mission performance on the robotic test site of UniBwM in Neubiberg; ‘on- & off-road driving’ including avoidance of both positive and negative (lower than the driving surface) obstacles. Real-time / full-frame stereo vision by a special processor system from SRI / PVT (Princeton) [34 to 46; D25 Kurzfassg; D26 Kurzfassg; D27 Kurzfassg; D28 Kurzfassg; D29 Kurzfassg; 79 Content, Chapter 14]. video ‘Mission performance on-off-road’

2003

Same mission with stereo vision on a single European-standard plug-in board in one of the PCs

 

 


4. Achievements with test vehicle VaMP 1994 – 2004

Topic

year

Text and links


1994

Test vehicle of the ‘Prometheus’ project (Mercedes 500 SEL) [23a pdf; 23b Abstract]: 2nd-generation transputer vision system with up to 60 processors [DDInf Zus.fassg; D17 Kurzfassg, D18 Kurzfassg, D23 Kurzfassg; D28 Kurzfassg, D30 Kurzfassg; 88b pdf]; bifocal vision both to the front and the rear; autonomous lane-change decisions on two-lane roads. See M.1 Visual perception


1994

Convoy-driving on multi-lane highways [D12 Kurzfassg]: Monocular motion stereo with the assumption of flat ground [23a pdf to 23d]. Width of vehicles initially estimated (parameter adaptation, up to five vehicles in each hemisphere), then fixed for improved range estimation [D17 Kurzfassg]. Autoroute A1 in France: video ‘Twofold lane change’
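Monocular range estimation under the flat-ground assumption rests on the pinhole relation between the image row of a vehicle’s road-contact point and its distance. A minimal sketch with hypothetical camera parameters (all names and values are assumptions for illustration):

```python
def range_flat_ground(row_px, horizon_row_px, focal_px, cam_height_m):
    """Monocular range under the flat-ground assumption: the image row
    of a vehicle's contact point with the road maps to distance via the
    pinhole model,  R = h * f / (row - horizon_row).
    Only rows below the horizon give a positive, finite range."""
    dy = row_px - horizon_row_px
    if dy <= 0:
        raise ValueError("contact point must lie below the horizon")
    return cam_height_m * focal_px / dy

# Camera 1.3 m above the road, focal length 750 px, horizon at row 240:
r = range_flat_ground(row_px=290, horizon_row_px=240, focal_px=750,
                      cam_height_m=1.3)   # 1.3 * 750 / 50 = 19.5 m
```

The sensitivity of this relation to the horizon row is why even small vertical road curvatures matter for precise monocular range estimation at larger look-ahead distances.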


1995

Long distance drive Munich−Odense [D18 Kurzfassg, D23]: Transputer system with Power-PC 601 processors for feature extraction (front hemisphere only) [D17 Kurzfassg]. Collect statistical data on performance of edge-based vision system to derive improvements required for the 3rd-generation system based on COTS-components.


1995

High-speed road running (max 175 km/h) [D18 Kurzfassg]; curvature-based perception of own & neighboring lanes; lateral ego-state including transitions between lanes [D30 Kurzfassg]. Large look-ahead ranges up to ~ 100 m.


1998 to 2000

Horizontal and vertical road curvature

At larger ranges, even small vertical curvatures are important for higher precision in monocular range estimation; EMS-vision [D30 Kurzfassg]. Yellow bars display curvature effects relative to estimated position of the horizon. Video ‘2000 Hybrid Adaptive Cruise Control’

Bifocal vision complementing an industrial radar system for more robust distance keeping, EMS-vision system [D30 Kurzfassg]. Longitudinal control fully autonomous, lateral control by the human driver to keep him ‘in the loop’ for better personal attention.


2002

Wheel recognition (oblique views from the rear) [D30 Kurzfassg]. Most characteristic item for separating vehicles from other large box-like objects; yields improved estimates of the relative heading angle of vehicles. Video ‘Wheel recognition’; this allows better situation assessment for lane changes.

 


5. Time line “Air vehicles with the sense of vision” 1982 – 1997

Pictorial symbolization

year

Text on Topic

1982 till 1987

G. Eberl: Derivation of basic modeling for motion in 3 dimensions with homogeneous coordinates (4 x 4 matrices based on quaternions); representation of landing strip as planar rectangle, and of ego-motion.

Edge extraction in the image and estimation of ego-state (3-D trajectory over time) relative to the landing strip. Scene interpretation by feedback of prediction errors: The validity of the approach has been proven for the special case without perturbations by stronger winds and gusts.

Dissertation Eberl 1987 [D2 Kurzfassg; 47; 48 pdf].

Video AircraftLandingApproach
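Motion modeling in homogeneous coordinates combines rotation and translation in a single 4 x 4 matrix, with the rotation part derivable from a quaternion. The sketch below illustrates the representation (not Eberl’s original code):

```python
import math
import numpy as np

def quat_to_hom(qw, qx, qy, qz, tx, ty, tz):
    """Build a 4x4 homogeneous transformation matrix from a unit
    quaternion (rotation) and a translation vector -- the form used
    for chaining 3-D motion models in homogeneous coordinates."""
    n = math.sqrt(qw*qw + qx*qx + qy*qy + qz*qz)
    qw, qx, qy, qz = qw/n, qx/n, qy/n, qz/n   # normalize defensively
    R = np.array([
        [1 - 2*(qy*qy + qz*qz), 2*(qx*qy - qw*qz),     2*(qx*qz + qw*qy)],
        [2*(qx*qy + qw*qz),     1 - 2*(qx*qx + qz*qz), 2*(qy*qz - qw*qx)],
        [2*(qx*qz - qw*qy),     2*(qy*qz + qw*qx),     1 - 2*(qx*qx + qy*qy)],
    ])
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = [tx, ty, tz]
    return T

# 90-degree yaw about the vertical axis plus a 10 m translation along x:
T = quat_to_hom(math.cos(math.pi/4), 0.0, 0.0, math.sin(math.pi/4),
                10.0, 0.0, 0.0)
p = T @ np.array([1.0, 0.0, 0.0, 1.0])   # point (1,0,0) maps to (10,1,0)
```

Because such matrices compose by multiplication, a chain of coordinate transformations (runway → aircraft body → camera) reduces to a single matrix product per frame.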


1987 till 1992

R. Schell: Inclusion of inertial sensors: measuring perturbations from gusts with almost no delay time, redundantly, by 4 rate gyros and 4 accelerometers; handling of the different delay times in the optical and the inertial data paths. With this approach the validity of the method was proven also for strong perturbations in the HIL-simulation loop. This led to DFG-funded fully autonomous test flights with BVV_2 on board a Do-128 plane of TU Brunswick at their airport (see left). These were performed down to shortly before touch-down; the safety pilot then performed a ‘go-around’ for the next test [49 to 52]; see the result in the latter part of

video AircraftLandingApproach.

Dissertation Schell 1992 [D7 Kurzfassg], [50 pdf]

1992 till 1997

Transputer-system for image feature extraction; recognition of large obstacles on the landing strip during landing approach [53; 54]. S. Werner: Landmark navigation of helicopter close to the ground [55, 56, 57 pdf]; realistic HIL-simulation of the vicinity around the airport Brunswick (several km in extension). Mission with different trajectory elements; for the perception part see [DDInf Zusammenfassung]

Dissertation Werner 1997 [D22 Kurzfassg]

Video HelicopterMissionHIL-SimBs

 

See also: HelicoptersWithSenseOfVision

 


6. Time line “Sense of vision in Outer Space” 1982 - 1994

Pictorial symbolization

year

Text on topic

1982 till 1987

H.-J. Wünsche: Concept of a model plant for ‘satellite docking’ with an air-cushion vehicle (laboratory experiment for aerospace technology). Two pairs of air jets for reaction control of the planar motion; perturbation-free air supply from the top. Used as the subject of the dissertation ‘Docking by machine vision’; extraction of corner features for recognizing the docking partner (white) and the relative pose with BVV1 (8-bit microprocessors) plus a PDP computer. Sequence of maneuvers: 1. approach, 2. find the docking direction by circling around, 3. rendezvous with mechanical lock-in [58; 59; 60].

Dissertation Wünsche 1987

[D3 Kurzfassg] See video ‚Air cushion vehicle’

1989 till 1993

 

Project of the Deutsches Zentrum für Luft- und Raumfahrt (Robotik im Weltraum, DLR Oberpfaffenhofen), Prof. G. Hirzinger: Initial subcontract for the development of software compensating the time delays in the tele-operation loop ‘robot arm and cameras on board in orbit, image evaluation in the ground station on Earth’ (~ 6 to 7 s time lag).

 

The new transputer system with the 4-D approach allowed an alternative to the solution chosen by DLR [DDInf Zusammenfassung]. After successful tests, the transputer system was given its first chance on May 2nd, 1993, in the D2 mission of Space Shuttle Columbia. It was a full success (a worldwide first) [61 Abstract].

Dissertation Fagerer 1996 [D20 Kurzfassg].

see video ROTEX

 

 


7. Awards received

 

1996 ‘Olympus’ award of the DAGM (Deutsche Arbeitsgemeinschaft für Mustererkennung), Heidelberg

1997 Philip Morris Forschungspreis for the development of the ‘4-D approach to dynamic machine vision and the applications demonstrated’, Munich

1998 Digi-Globe for contributions to ‘Vehicles capable of vision’, (Munich)

2006 ‘Hella Engineering Award’ for ‘Contributions to road vehicles capable of vision’, Lippstadt

2016 Lifetime Achievement Award of IEEE-ITSS (Intelligent Transportation Systems Society) for ‘Pioneering Work in the Field of Autonomous Vehicles’, in Rio de Janeiro at ITS-Conference-2016, Nov. (jpg)

2017 ‘Eduard Rhein Technologiepreis 2017’ of the EDUARD-RHEIN-STIFTUNG

References

Dissertations mentored

D1 Meissner H.G. 1982: Steuerg dynamischer Systeme aufgrund bildhafter Informationen. Kurzfassung

D2 Eberl G. 1987: Automatischer Landeanflug durch Rechnersehen. Kurzfassung

D3 Wünsche H.-J. 1987: Erfassung und Steuerung von Bewegungen durch Rechnersehen. Kurzfassung.

D4 Zapp A. 1988: Automatische Straßenfahrzeugführung durch Rechnersehen. Kurzfassung

D5 Mysliwetz B. 1990: Parallelrechnerbasierte Bildfolgeninterpretation zur autonomen Fahrzeugführung. Kurzfassung

D6 Otto K.-D. 1990: Linear-quadratischer Entwurf mit Strukturvorgabe. Kurzfassung

D7 Schell F.-R. 1992: Bordautonomer automatischer Landeanflug aufgrund bildhafter und inertialer Meßdatenauswertung. Kurzfassung

D8 Uhrmeister B. 1992: Verbesserung der Lenkung eines Luft-Luft-Flugkörpers durch einen abbildenden Sensor. Kurzfassung

D10 Schick J. 1992: Gleichzeitige Erkenng von Form und Bewegung durch Rechnersehen. Kurzfassung

D11 Hock Ch. 1994: Wissensbasierte Fahrzeugführg mit Landmarken für autonome Roboter. Kurzfassung

D12 Brüdigam C. 1994: Intelligente Fahrmanöver sehender autonomer Fahrzeuge in autobahn-ähnlicher Umgebung. Kurzfassung

D13 Schmid M. 1994: 3-D-Erkenng von Fahrzeugen in Echtzeit aus monokularen Bildfolgen. Kurzfassung

D14 Kinzel W. 1994: Präattentive und attentive Bildverarbeitungsschritte zur visuellen Erkennung von Fußgängern. Kurzfassung

D15 Baader A. 1995: Ein Umwelterfassungssystem für multisensorielle Montageroboter.

D16 Schiehlen J. 1995: Kameraplattformen für aktiv sehende Fahrzeuge. Kurzfassung

D17 Thomanek F. 1996: Visuelle Erkennung und Zustandsschätzung von mehreren Straßenfahrzeugen zur autonomen Fahrzeugführung. Kurzfassung

D18 Behringer R. 1996: Visuelle Erkennung und Interpretation des Fahrspurverlaufes durch Rechnersehn für ein autonomes Straßenfahrzeug. Kurzfassung

D19 Müller N. 1996: Autonomes Manövrieren und Navigieren mit einem sehenden Fahrzeug. Kurzfassung

D20 Fagerer Ch. 1996: Automatische Teleoperation eines Tracking- und Greifvorgangs im Weltraum, basierend auf Bilddatenauswertung. Kurzfassung

D21 Schubert A. 1996: Synthese diskreter Zustandsregler durch eine Verbindung direkter und indirekter Methoden. Kurzfassung

D22 Werner S. 1997: Maschinelle Wahrnehmung für den bordautonomen automatischen Hubschrauberflug. Kurzfassung

DDInf Dickmanns Dirk 1997: Rahmensystem für die visuelle Wahrnehmung veränderlicher Szenen durch Computer. Diss. UniBw München, Fak. Informatik Zusammenfassung

D23 Maurer M. 2000: Flexible Automatisierung von Straßenfahrzeugen mit Rechnersehen. Kurzfassung

D24 Rieder A. 2000: Fahrzeuge sehen – Multisensorielle Fahrzeugerkennung in einem verteilten Rechnersystem für autonome Fahrzeuge. Kurzfassung

D25 Lützeler M. 2002: Fahrbahnerkennung zum Manövrieren auf Wegenetzen mit aktivem Sehen. Kurzfassung

D26 Gregor R. 2002: Fähigkeiten zur Missionsdurchführung und Landmarkennavigation. Kurzfassung

D27 Pellkofer M. 2003: Verhaltensentscheidung für autonome Fahrzeuge mit Blickrichtungssteuerung. Kurzfassung

D28 Von Holt V. 2004: Integrale Multisensorielle Fahrumgebungserfassung nach dem 4-D Ansatz. Kurzfassung

D29 Siedersberger K.H. 2004: Komponenten zur automatischen Fahrzeugführung in sehenden (semi-) autonomen Fahrzeugen. Kurzfassung

D30 Hofmann U. 2004: Zur visuellen Umfeldwahrnehmung autonomer Fahrzeuge. Kurzfassung

 

Ground vehicles

[1] Dickmanns E.D. 1980: Untersuchung und mögliche Arbeitsschritte zum Thema „Künstliche Intelligenz: Rechnersehen und –steuerung dynamischer Systeme“. HsBwM / LRT / WE 13 / IB 80-1 (18 S.) textpdf

[2] Meissner H.G. 1982: Steuerung dynamischer Systeme aufgrund bildhafter Informationen. Kurzfassung

[3] Meissner HG; Dickmanns ED 1983: Control of an Unstable Plant by Computer Vision. In T.S. Huang (ed): Image Sequence Processing and Dynamic Scene Analysis. Springer, Berlin, pp 532-548

[4] Dickmanns E.D., Zapp A. 1986: A Curvature-based Scheme for Improving Road Vehicle Guidance by Computer Vision. In: 'Mobile Robots', SPIE Proc. Vol. 727, Cambridge, MA, pp 161-168

[5] Dickmanns E.D. 1986: Computer Vision in Road Vehicles – Chances and Problems. ICTS-Symp. on "Human Factors Technology for Next-Generation Transportation Vehicles", Amalfi, Italy. Abstract, pdf

[6] Dickmanns E.D. 1987a: State of the art review for ‘PRO-ART, 11 170 Integrated approaches’. UniBw M/LRT/WE 13/IB/87-2 (limited distribution)

[7] Dickmanns E.D. 1987b: 4-D Dynamic Scene Analysis with Integral Spatio-Temporal Models. 4th Int. Symp. on Robotics Research, Santa Cruz. In: Bolles RC; Roth B (eds) 1988: Robotics Research, MIT Press, Cambridge, pp 311-318. pdf

[8] Dickmanns E.D.; Zapp A. 1987c: Autonomous High Speed Road Vehicle Guidance by Computer Vision. 10th IFAC World Congress Munich, Preprint Vol. 4, 1987, pp 232-237. pdf

[9] Dickmanns E.D. 1987d: Object Recognition and Real-Time Relative State Estimation Under Egomotion. NATO Advanced Study Institute, Maratea, Italy (Aug./Sept.). In: A.K. Jain (ed): Real-Time Object Measurement and Classification. Springer-Verlag, Berlin, 1988, pp 41-56. Abstract

[10] Dickmanns E.D.; Graefe V.; Niegel W. 1987: Abschlussbericht Definitionsphase PROMETHEUS Pro-Art, pp. 1-18. pdf

[11] Roland A., Shiman P. 2002: Strategic Computing: DARPA and the Quest for Machine Intelligence, 1983–1993. MIT Press

[12] Dickmanns E.D.; Graefe V. 1988: a) Dynamic monocular machine vision. Machine Vision and Applications, Springer International, Vol. 1, pp 223-240. b) Applications of dynamic monocular machine vision. (ibid), pp 241-261 Abstract , Excerpts pdf

[13] Dickmanns E.D. 1989: Invited keynote talk at IJCAI, Detroit, Aug. 23, on ‘The 4-D approach to real-time vision’ (11:10 – 12:40); including several videos with experimental results.

[14] Dickmanns E.D., Christians T. 1989: Relative 3-D-state Estimation for Autonomous Visual Guidance of Road Vehicles. In T. Kanade et al (eds): 'Intelligent Autonomous Systems 2', Amsterdam, Dec. Vol. 2, pp 683-693; also appeared in: Robotics and Autonomous Systems 7 (1991), Elsevier Science Publ., pp 113-123 pdf

[15] Dickmanns E.D. 1989: Subject-Object Discrimination in 4-D Dynamic Scene Interpretation by Machine Vision. Proc. IEEE-Workshop on Visual Motion, Newport Beach, pp 298-304. pdf

[16] Dickmanns E.D.; Mysliwetz B.; Christians T. 1990: Spatio-Temporal Guidance of Autonomous Vehicles by Computer Vision. IEEE-Transactions on Systems, Man and Cybernetics, Vol. 20, No. 6, Special Issue on Unmanned Vehicles and Intelligent Robotic Systems, pp 1273−1284.

[17] Schick J.; Dickmanns E.D. 1991: Simultaneous Estimation of 3-D Shape and Motion of Objects by Computer Vision. IEEE Workshop on Visual Motion, Princeton, N.J., pdf

[18] Dickmanns E.D. 1988a: An Integrated Approach to Feature Based Dynamic Vision. Int. Conference on Computer Vision and Pattern Recognition (CVPR), Ann Arbor, 1988, pp 820-825.

[19] Dickmanns E.D. 1991: Dynamic vision for locomotion control – An evolutionary path to intelligence. CCG-lecture. pdf

[20] Dickmanns E.D. 1992: A General Dynamic Vision Architecture for UGV and UAV. Journal of Applied Intelligence 2, pp. 251-270 Abstract (plus Introd.)

[21] Dickmanns E.D.; Mysliwetz B. 1992: Recursive 3-D Road and Relative Ego-State Recognition. IEEE-Transactions PAMI, Vol. 14, No. 2, Special Issue on 'Interpretation of 3-D Scenes', Feb. pp 199-213 Abstract (plus Introd.)

[22] Dickmanns E.D. 1993: Expectation-Based Dynamic Scene Understanding. In A. Blake and A. Yuille (eds): 'Active Vision', MIT Press, Cambridge, Mass., 1993, pp. 303-335

[23] Four contributions in Masaki (ed) 1994: Proc. of Int. Symp. on Intelligent Vehicles '94, Paris, Oct.

a) Dickmanns E.D.; Behringer R.; Dickmanns D.; Hildebrandt T.; Maurer M.; Thomanek F.; Schiehlen J.: The Seeing Passenger Car 'VaMoRs-P'. pp 68-73 Abstract , pdf

b) Thomanek F.; Dickmanns E.D.; Dickmanns D.: Multiple Object Recognition and Scene Interpretation for Autonomous Road Vehicle Guidance. pp. 231-236 Abstract

c) Von Holt: Tracking and Classification of Overtaking Vehicles on Autobahnen. pp 314-319

d) Schiehlen J.; Dickmanns E.D.: A Camera Platform for Intelligent Vehicles. pp 393-398

[24] Dickmanns E.D.; Müller N. 1995: Scene Recognition and Navigation Capabilities for Lane Changes and Turns in Vision-Based Vehicle Guidance. Control Engineering Practice, 2nd IFAC Conf. on Intelligent Autonomous Vehicles-95, Helsinki. pdf

[25] Mandelbaum R., Hansen M., Burt P., Baten S. 1998: Vision for Autonomous Mobility: Image Processing on the VFE-200. In: IEEE International Symposium on ISIC, CIRA and ISAS

[26] Baten S.; Lützeler M.; Dickmanns E.D.; Mandelbaum R.; Burt P. 1998: Techniques for Autonomous Off-Road Navigation. IEEE Intelligent Systems, Vol. 13, No. 6, pp 57-65

[27] Albus J.S., 2000: 4-D/RCS reference model architecture for unmanned ground vehicles. Proc. of the International Conference on Robotics and Automation, San Francisco, April 24-27

[28] Albus J.S., Meystel A.M. 2001: Engineering of Mind. – An Introduction to the Science of Intelligent Systems. Wiley Series on Intelligent Systems

[29] Dickmanns, E.D. 1997: Vehicles Capable of Dynamic Vision. Proc. 15th International Joint Conference on Artificial Intelligence (IJCAI-97), Vol. 2, Nagoya, Japan, pp 1577-1592 Abstract (with Introduction)

[30] Dickmanns E.D. 1998: Expectation-based, Multi-focal, Saccadic Vision for Perceiving Dynamic Scenes (EMS-Vision). In C. Freksa (ed.): Proc. in Artificial Intelligence, Vol. 8, pp 47-54

[31] Dickmanns E.D., Wuensche H.-J. 1999: Dynamic Vision for Perception and Control of Motion. In: B. Jähne, H. Haußecker and P. Geißler (eds.): Handbook of Computer Vision and Applications, Vol. 3, Academic Press, pp 569-620 Content (and Introd.)

[32] Dickmanns E.D. 1999: An Expectation-based, Multi-focal, Saccadic (EMS) Vision System for Vehicle Guidance. In Hollerbach and Koditschek (eds.): 'Robotics Research' (The Ninth Symposium), Springer-Verlag, Extended_abstract

[33] Six contributions to EMS-Vision in the Proceedings of the Internat. Symposium on Intelligent Vehicles (IV'2000), Dearborn (MI, USA), Oct. 4-5:

a) Gregor, R., Lützeler, M., Pellkofer, M., Siedersberger, K.H. and Dickmanns, E.D.: EMS-Vision: A Perceptual System for Autonomous Vehicles. pp 52-57 pdf

b) Pellkofer, M., Dickmanns, E.D.: EMS-Vision: Gaze Control in Autonomous Vehicles. pp 296-301 pdf

c) Lützeler, M. and Dickmanns, E.D.: EMS-Vision: Recognition of Intersections on Unmarked Road Networks. pp 302-307 pdf

d) Gregor, R., Dickmanns, E.D.: EMS-Vision: Mission Performance on Road Networks. pp 468-473; pdf

e) Hofmann, U.; Rieder, A., Dickmanns, E.D.: EMS-Vision: Application to Hybrid Adaptive Cruise Control. pp 468-473 pdf

f) Siedersberger K.-H., Dickmanns E.D.: EMS-Vision: Enhanced Abilities for Locomotion. pdf

[34] Gregor R., Lützeler M., Dickmanns E.D. 2001: EMS-Vision: Combining on- and off-road driving. Proc. SPIE Conf. on Unmanned Ground Vehicle Technology III, AeroSense '01, Orlando (FL), April 16-17, pp. 329-340 Abstract (and part of Introd.)

[35] Gregor R., Lützeler M., Pellkofer M., Siedersberger K.-H., Dickmanns E.D. 2001: A Vision System for Autonomous Ground Vehicles with a Wide Range of Maneuvering Capabilities. Proc. ICVS, Vancouver, July

[36] Siedersberger K.-H.; Pellkofer M., Lützeler M., Dickmanns E.D., Rieder A., Mandelbaum R., Bogoni I. 2001: Combining EMS-Vision and Horopter Stereo for Obstacle Avoidance of Autonomous Vehicles. Proc. ICVS, Vancouver, July

[37] Pellkofer M., Lützeler M., Dickmanns E.D. 2001: Interaction of Perception and Gaze Control in Autonomous Vehicles. Proc. SPIE: Intelligent Robots and Computer Vision XX, Oct., Newton, USA, pp 1-12 Abstract (and Introd.)

[38] Pellkofer M., Lützeler M. and Dickmanns E.D. 2001: Vertebrate-type perception and gaze control for road vehicles. Proc. Int. Symp. on Robotics Research, Nov., Lorne, Australia pdf

[39] Gregor, R., Lützeler, M., Pellkofer, M., Siedersberger, K.H. and Dickmanns, E.D. 2002: EMS-Vision: A Perceptual System for Autonomous Vehicles. IEEE Trans. on Intelligent Transportation Systems, Vol.3, No.1, March, pp. 48 – 59

[40] Dickmanns E.D. 2002: Expectation-based, Multi-focal, Saccadic (EMS) Vision for Ground Vehicle Guidance. Control Engineering Practice 10 (2002), pp. 907-915

[41] 2001/2002: Komplexes technisches Auge aus normalen CCD-Sensoren zur dynamischen Umgebungserfassung. In R.-J. Ahlers (ed.): 7. Symposium „Bildverarbeitung 2001“ (event postponed to June 2002), Techn. Akad. Esslingen, Nov. 2001, pp. 163-172.

[42] Pellkofer M., Dickmanns E.D. 2002: Behavior Decision in Autonomous Vehicles. Proc. Int. Symp. on Intelligent Vehicles '02, Versailles, June

[43] Pellkofer M., Lützeler M. and Dickmanns E.D. 2003: Vertebrate-type interaction of perception and gaze control for autonomous road vehicles. In: Jarvis R.A. and Zelinski A.: Robotics Research, The Tenth International Symposium. Springer Verlag, pp.271-288

[44] Dickmanns E.D. 2003: Expectation-based, Multi-focal, Saccadic Vision - (Understanding dynamic scenes observed from a moving platform). In: Olver P.J., Tannenbaum A. (eds.): ‘Mathematical Methods in Computer Vision‘, Springer-Verlag, pp. 19-35

[45] Pellkofer M, Hofmann U., Dickmanns E.D. 2003: Autonomous cross-country driving using active vision. SPIE Conf. 5267, Intelligent Robots and Computer Vision XXI: Algorithms, Techniques, and Active Vision. Photonics East, Providence, Rhode Island, Oct.

[46] Website www.dyna-vision.de (31310 VaMoRs MissionPerform IFF.htm)

Air vehicles

[47] Dickmanns E.D.; Eberl G. 1987: Automatischer Landeanflug durch maschinelles Sehen. Jahrestagung der DGLR (DGLR-Jahrbuch 1987), Berlin, pp 294-300

[48] Dickmanns E.D. 1988: Computer Vision for Flight Vehicles. Zeitschrift für Flugwissenschaft und Weltraumforschung (ZFW), Vol. 12 (88), pp 71-79. pdf (text excerpts)

[49] Schell F.-R.; Dickmanns E.D. 1989: Autonomous Automatic Landing through Computer Vision. AGARD Conference Proc. No. CP-455: Advances in Techniques and Technologies for Air Vehicle Navigation and Guidance, Lisbon, May, pp 24.1-24.9

[50] Dickmanns E.D.; Schell F.-R. 1992: Visual Autonomous Automatic Landing of Airplanes. AGARD Symp. on Advances in Guidance and Control of Precision Guided Weapons, Ottawa, May. pdf

[51] Schell F.-R.; Dickmanns E.D. 1992: Autonomous Landing of Airplanes by Dynamic Machine Vision. Proc. IEEE-Workshop on 'Applications of Computer Vision', Palm Springs, Nov/Dec

[52] Schell F.-R.; Dickmanns E.D. 1994: Autonomous Landing of Airplanes by Dynamic Machine Vision. Machine Vision and Applications, Vol. 7, No. 3, pp 127-134

[53] Fürst S.; Werner S.; Dickmanns D.; Dickmanns E.D. 1997: Landmark navigation and autonomous landing approach with obstacle detection for aircraft. AeroSense ’97, SPIE Proc. Vol. 3088, Orlando FL, April 20-25, pp 94-105.

[54] Fürst S., Dickmanns E.D.: A vision based navigation system for autonomous aircraft. Robotics and Autonomous Systems 28, 1999, pp 173-184

[55] Werner S.; Buchwieser A.; Dickmanns E.D. 1995: Real-Time Simulation of Visual Machine Perception for Helicopter Flight Assistance. Proc. SPIE - Aero Sense, Orlando, FL, April

[56] Werner S.; Fürst S.; Dickmanns D.; Dickmanns E.D. 1996: A vision-based multi-sensor machine perception system for autonomous aircraft landing approach. Enhanced and Synthetic Vision AeroSense '96, SPIE, Vol. 2736, Orlando, FL, April, pp 54-63

[57] Fürst S., Werner S., Dickmanns D.; Dickmanns E.D. 1997: Landmark Navigation and Autonomous Landing Approach with Obstacle Detection for Aircraft. AGARD MSP Symp. on System Design Considerations for Unmanned Tactical Aircraft (UTA), Athens, Greece, October 7-9, pp 20-1 – 20-11 pdf

Space applications

[58] Dickmanns E.D.; Wünsche H.-J. 1985: Drehlage-Regelung eines Satelliten durch Echtzeit-Bildfolgenverarbeitung. In H. Niemann (ed): Mustererkennung 1985, Informatik Fachberichte 107, Springer-Verlag, pp 239-243

[59] Dickmanns E.D.; Wünsche H.-J. 1986: Satellite Rendezvous Maneuvers by Means of Computer Vision. Annual Conference of the DGLR, Munich, Oct. 1986. In: DGLR-Jahrbuch 1986, Vol. 1, Bonn, pp 251-259.

[60] Dickmanns E.D.; Wünsche H.-J. 1986: Regelung mittels Rechnersehen. Automatisierungstechnik (at), Vol. 34, No. 1, 1986, pp. 16-22

[61] Fagerer C.; Dickmanns D.; Dickmanns E.D. 1994: Visual Grasping with Long Delay Time of a Free Floating Object in Orbit. 4th IFAC Symposium on Robot Control (SY.RO.CO.'94), Capri, Italy, pp 947-952 Abstract

Publications after retirement (general surveys, image feature extraction)

[62] Dickmanns E.D. 2001: Efficient Computation of Intensity Profiles for Real-Time Vision. Proc. Workshop ‚Robot Vision 2001‘, Auckland, Febr.

[63] 2001: Fahrzeuge lernen sehen. Brochure (139 pp.) and CD (with ~40 minutes of video clips) on 25 years of research and teaching at UniBwM, Oct. Abstract (Cover, Contents)

[64] 2002: Sehende Fahrzeuge – UniBwM setzte jahrelang Maßstäbe. Hochschulkurier UniBwM, No. 14, April 2002, pp. 13-22.

[65] 2002: The development of machine vision for road vehicles in the last decade. Proc. Int. Symp. on Intelligent Vehicles '02, Versailles, June, pdf

[66] 2002: Vision for ground vehicles: history and prospects. Int. J. of Vehicle Autonomous Systems (IJVAS), Vol.1, No.1, pp. 1 – 44. Abstract

[67] 2002: Expectation-based, Multi-focal, Saccadic (EMS) Vision. NORSIG-2002. Proc. 5th Nordic Signal Processing Symposium, Tromsoe – Trondheim, Oct. 2002

[68] 2002: Zukünftige Wahrnehmungsfähigkeiten sehender Fahrzeuge. Proc. 11. Aachener Kolloquium 'Fahrzeug- und Motorentechnik', Vol. 2, Oct. 2002, pp. 1192-1204

[69] 2002: Neue Erklärungsmöglichkeit zur Frage des Geistes? Forschung & Lehre, 10/2002, pp. 546-547

[70] 2003: Chapter 6.43.36 „Automation and Control in Traffic Systems“ in Encyclopedia of Life Support Systems (EOLSS), Eolss Publishers, Oxford, UK, 2003, [http://www.eolss.net]

[71] 2003: An advanced vision system for ground vehicles. Proc. Workshop ‚In Vehicle Cognitive Computer Vision System‘, ICVS Graz, April

[72] 2003: A General Cognitive System Architecture Based on Dynamic Vision for Motion Control. Proc. 7th World Multiconference on Systemics, Cybernetics and Informatics (SCI), July, Orlando. Abstract

[73] 2004: Dynamic Vision Based Intelligence. AI-Magazine, Vol. 25, Nr. 2, Summer 2004. pp. 10-30. Abstract

[74] 2004: Three specific stages in visual perception for vertebrate-type dynamic machine vision. ACIVS, Brussels (CD).

[75] 2004: A Third-Generation Dynamic Vision System for Vehicle Guidance. NATO Research and Technology Agency, System Concepts and Integration (SCI) Panel, – Task Group 118. Rom, Oct. 2004 (Automation Technologies and Application Considerations for Highly Integrated Mission Systems)

[76] 2005: Vision: Von Assistenz zum autonomen Fahren. In Maurer and Stiller (eds.): 'Fahrerassistenzsysteme mit maschineller Wahrnehmung'. Springer Verlag, 2005, pp. 203-237.

[77] Dickmanns E.D., Wuensche H.-J. 2005: (chapter 6): Advanced Sensing Techniques for Automated System Applications: Perception based on Dynamic Vision. Contribution to NATO-RTA, June

[78] 2006: Nonplanarity and efficient multiple feature extraction. Proc. Int. Conf. on Vision and Applications (Visapp), Setubal, Febr. 2006 (8 pages) pdf

[79] Dickmanns E.D. 2007: Dynamic Vision for Perception and Control of Motion. Springer-Verlag, April (474 pages) Abstract , Content

[80] 2008: Corner Detection with Minimal Effort on Multiple Scales. Int. Conf. on Vision and Applications (Visapp), Madeira, Jan. (6 pages) pdf

[81] 2008: Generalized Nonplanarity Features. UniBwM / LRT / TAS / TR 2008-08

[82] 2009: Detaillierte visuelle Umgebungserkennung durch vereinheitlichte Extraktion von Kanten, Ecken und linear schattierten Flecken. In Maurer and Stiller (eds.): 'Fahrerassistenzsysteme mit maschineller Wahrnehmung'. Springer Verlag.

[83] 2011: After-dinner talk at AGI-11 (4th International Conference on 'Artificial General Intelligence', Google Center, Mountain View, California, Aug. 8). A slide and video-clip show under the heading “Dynamic Vision as Key Element for Artificial General Intelligence”; available on YouTube, with introduction (until 4:20 [min:sec]) and discussion (from 1:04 [h:min])

http://www.youtube.com/watch?v=YZ6nPhUG2i0

[84] 2012: Detailed Visual Recognition of Road Scenes for Guiding Autonomous Vehicles. pp. 225-244, in Chakraborty S. and Eberspächer J. (eds): Advances in Real-Time Systems, Springer, (355 pages)

[85] 2013: Maneuvers as Knowledge Elements for Vision and Control. IEEE Workshop on Robot Motion Control, July 2013, Wąsowo (Poznań), pp. 42-47, Abstract , pdf

[86] 2015: BarVEye: Bifocal active gaze control for autonomous driving. VISAPP 2015, Berlin, March, pp. 428-436 (poster presentation) Abstract

[87] 2015: Knowledge Bases for Visual Dynamic Scene Understanding. VISAPP 2015, Berlin, March, pp. 209-215 (poster presentation) Abstract

[88] 2015: Contributions to Visual Autonomous Driving. A Review.

a) Part I: Basic Approach to Real-time Computer Vision with Spatiotemporal Models (1977 – 1989) pdf

b) Part II: PROMETHEUS and the 2nd-generation System for Dynamic Vision (1987 – 1996) pdf

c) Part III: Expectation-based, Multi-focal, Saccadic (EMS-) Vision (1997 – 2004) pdf

[89] Dickmanns E.D. 2015: Buchbesprechung (book review): 'Autonomes Fahren – Technische, rechtliche und gesellschaftliche Aspekte', Daimler Benz Stiftung, eds.: M. Maurer, J. C. Gerdes, B. Lenz, H. Winner; Springer Verlag. pdf

[90] Dickmanns E.D. 2017: Entwicklung des Gesichtssinns für autonomes Fahren – Der 4-D Ansatz brachte 1987 den Durchbruch. In VDI-Berichte 2292: AUTOREG 2017, VDI Verlag GmbH (ISBN 978-3-18-092233-1), pp. 5-20 pdf