We present a method for vision-based, reactive robot navigation that enables a robot to move in the middle of the free space by exploiting both central and peripheral vision. The robot employs a forward-looking camera for central vision and two side-looking cameras for sensing the periphery of its visual field. The developed method combines the information acquired by this trinocular vision system and produces low-level motor commands that keep the robot in the middle of the free space. The approach follows the purposive vision paradigm, in the sense that vision is not studied in isolation but in the context of the behaviors in which the system is engaged, as well as the environment and the robot's motor capabilities. It is demonstrated that by taking these issues into account, vision processing can be drastically simplified while still giving rise to quite complex behaviors. The proposed method does not make strict assumptions about the environment, requires only very low-level information to be extracted from the images, produces robust robot behavior, and is computationally efficient. Results obtained both from simulations and from a prototype on-line implementation demonstrate the effectiveness of the method.
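To give a rough feel for how peripheral vision can drive such a centering behavior, the sketch below balances the motion signals seen by the two side cameras: the side that is closer to obstacles produces larger image motion, so the robot steers away from it. The function name, the gain parameter, and the use of a single averaged flow magnitude per camera are illustrative assumptions, not details of the actual system described above.

```python
def steering_command(left_flow, right_flow, gain=1.0):
    """Rotational command that steers the robot toward the side with the
    smaller peripheral image motion (i.e., the more distant obstacles).

    left_flow, right_flow: average optical-flow magnitudes from the two
    side-looking cameras. Positive output turns the robot left,
    negative output turns it right.
    """
    total = left_flow + right_flow
    if total < 1e-6:          # no peripheral motion observed: go straight
        return 0.0
    # Larger flow on the right means the right side is closer: turn left.
    return gain * (right_flow - left_flow) / total
```

In such a scheme the central, forward-looking camera would typically serve a complementary role, e.g. modulating the forward speed or detecting frontal obstacles, while the peripheral balance term keeps the robot centered.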
If you are interested in this topic, you may also be interested in the extension of this method that employs panoramic vision instead of a trinocular vision system.
Schematic and actual placement of peripheral cameras
Download video with simulation experiments
Download video showing the behavior of Charlie (CVAP, KTH) as it moves under the proposed control strategy
Antonis Argyros, Fredrik Bergholm.
The electronic versions of the above publications can be downloaded from my publications page.