Vision-based SLAM and moving objects tracking for a smart walker platform



Brief description

The problems of vision-based detection and tracking of independently moving objects, localization, and map construction are highly interrelated, in the sense that the solution of any of them provides valuable information for the solution of the others. In this paper, rather than trying to solve each of them in isolation, we propose a method that treats all of them simultaneously. More specifically, given visual input acquired by a moving RGBD camera, the method detects independently moving objects and tracks them over time. Additionally, the method estimates the camera (ego)motion and the motion of the tracked objects in a coordinate system that is attached to the static environment, a map of which is progressively built from scratch. The loose assumptions that the method adopts with respect to the problem parameters make it a valuable component for any robotic platform that moves in a dynamic environment and requires simultaneous tracking of moving objects, egomotion estimation and map construction. The usability of the method is further enhanced by its robustness and its low computational requirements, which permit real-time execution even on low-end CPUs.
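To make the pipeline more concrete, here is a minimal sketch (in Python, using the Open3D library) of the egomotion-estimation and map-building steps. It is not the authors' implementation: it substitutes generic point-to-plane ICP for the paper's sparse-point egomotion estimator, and the input clouds `frame` and `world_map` as well as the voxel and distance parameters are illustrative placeholders.

```python
# Minimal sketch (not the paper's implementation): estimate camera
# egomotion by registering a sparse version of the current RGBD point
# cloud against the map built so far, then grow the map.
import numpy as np
import open3d as o3d

def track_and_map(frame, world_map, prev_pose, voxel=0.05):
    """Return the estimated camera pose and the updated map.

    frame, world_map: o3d.geometry.PointCloud (placeholder inputs);
    prev_pose: 4x4 pose of the previous frame, used to initialize ICP.
    """
    sparse = frame.voxel_down_sample(voxel)   # sparse 3D points for egomotion
    world_map.estimate_normals(
        o3d.geometry.KDTreeSearchParamHybrid(radius=3 * voxel, max_nn=30))
    reg = o3d.pipelines.registration.registration_icp(
        sparse, world_map, 2 * voxel, prev_pose,
        o3d.pipelines.registration.TransformationEstimationPointToPlane())
    pose = reg.transformation                 # 4x4 camera-to-map transform
    aligned = frame.transform(pose)           # register full cloud (in place)
    # Extend the map with the registered points; moving points should be
    # filtered out first (see the foreground-extraction sketch below).
    world_map += aligned
    return pose, world_map.voxel_down_sample(voxel)
```

For the first frame, `world_map` can be initialized with the frame's own cloud and `prev_pose` with the identity matrix (`np.eye(4)`).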

Top left: An RGBD frame. Top middle: The current map of the environment. Top right: A sparse set of 3D points used for egomotion estimation. Bottom left: Registration of the whole point cloud with the current map of the environment. Bottom middle: A top view showing the camera position, the local environment map and the independently moving objects. Bottom right: The point cloud of the foreground moving object.
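Continuing the same hypothetical sketch, the foreground/background separation illustrated above can be approximated by flagging registered points that remain far from the static map and grouping them into individual objects. The distance threshold and DBSCAN clustering parameters below are illustrative guesses, not values from the paper.

```python
# Minimal sketch of foreground extraction (illustrative, not the
# paper's method): points of the map-registered cloud that remain far
# from the static map are treated as independently moving, then
# clustered into individual objects with DBSCAN.
import numpy as np
import open3d as o3d

def extract_moving_objects(aligned, world_map, dist_thresh=0.10):
    """Split a map-registered cloud into per-object foreground clouds."""
    dists = np.asarray(aligned.compute_point_cloud_distance(world_map))
    fg_idx = np.where(dists > dist_thresh)[0]     # far from the map => moving
    foreground = aligned.select_by_index(fg_idx)
    labels = np.asarray(foreground.cluster_dbscan(eps=0.15, min_points=30))
    n_objects = labels.max() + 1 if labels.size else 0
    return [foreground.select_by_index(np.where(labels == k)[0])
            for k in range(n_objects)]            # one cloud per moving object
```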

Flow diagram of the proposed method.


Sample results

Left: Segmentation and tracking. In the floor plan view (right image), background objects are shown in purple and foreground objects in green. Foreground objects are segmented (bottom left) and tagged. The RGBD camera is mounted on a prototype smart walker during trials; the current location of the camera is at the center of the blue cross. Right: The resulting colored point cloud, created from a sequence captured with the prototype smart walker. The method produces an accurate model of the environment.

(a) A floor map of an environment built in the absence of moving objects. The estimated platform trajectory is shown in green. (b) The map of the environment built in the presence of moving objects. Camera (green) and moving object (orange) trajectories are also shown.




Contributors

Paschalis Panteleris, Antonis Argyros.


Relevant publications

  • P. Panteleris, A.A. Argyros, “Vision-based SLAM and moving objects tracking for the perceptual support of a smart walker platform”, Workshop on Assistive Computer Vision and Robotics (ACVR 2014), in conjunction with ECCV 2014, Zurich, Switzerland, Sep. 12, 2014.

The electronic version of the above publication can be downloaded from my publications page.