From multiple views to textured 3D meshes: a GPU-powered approach



Brief description

We present work on exploiting modern graphics hardware for the real-time production of a textured 3D mesh representation of a scene observed by a multi-camera system. The employed computational infrastructure consists of a network of four PC workstations, each of which is connected to a pair of cameras. One of the PCs is equipped with a GPU that is used for parallel computations. The result of the processing is a list of texture-mapped triangles representing the reconstructed surfaces. In contrast to previous works, the entire processing pipeline (foreground segmentation, 3D reconstruction, 3D mesh computation, 3D mesh smoothing and texture mapping) has been implemented on the GPU. Experimental results demonstrate that an accurate, high-resolution, texture-mapped 3D reconstruction of a scene observed by eight cameras is achievable in real time.
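
To make the first stage of the pipeline concrete, the sketch below shows how per-pixel foreground segmentation maps naturally onto the GPU: one thread per pixel compares the live frame against a static background model. This is only a minimal illustration under assumed details (grayscale images, a fixed difference threshold, 640x480 resolution, and the kernel/function names), not the exact formulation used in the paper.

// Minimal CUDA sketch of GPU foreground segmentation (illustrative only).
#include <cuda_runtime.h>
#include <cstdlib>

__global__ void segmentForeground(const unsigned char* frame,
                                  const unsigned char* background,
                                  unsigned char* mask,
                                  int width, int height,
                                  int threshold)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    int idx  = y * width + x;
    int diff = abs((int)frame[idx] - (int)background[idx]);
    // A pixel is labelled foreground when it deviates sufficiently
    // from the background model.
    mask[idx] = (diff > threshold) ? 255 : 0;
}

// Host-side launch for a single camera image; buffers are assumed to
// already reside in device memory.
void runSegmentation(const unsigned char* d_frame,
                     const unsigned char* d_background,
                     unsigned char* d_mask)
{
    const int width = 640, height = 480;   // assumed resolution
    dim3 block(16, 16);
    dim3 grid((width  + block.x - 1) / block.x,
              (height + block.y - 1) / block.y);
    segmentForeground<<<grid, block>>>(d_frame, d_background, d_mask,
                                       width, height, 25 /* assumed threshold */);
    cudaDeviceSynchronize();
}

The same one-thread-per-element pattern carries over to the later stages (voxel-based 3D reconstruction, mesh extraction, smoothing and texture mapping), which is what allows the whole pipeline to stay on the GPU.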


Sample results

A video with online and offline 3D reconstruction experiments.


Contributors

  • Konstantinos Tzevanidis, Xenophon Zabulis, Thomas Sarmis, Panagiotis Koutlemanis, Nikolaos Kyriazis, Antonis Argyros
  • This work was partially supported by the IST-FP7-IP-215821 project GRASP

Relevant publications

  • K. Tzevanidis, X. Zabulis, T. Sarmis, P. Koutlemanis, N. Kyriazis and A.A. Argyros, “From multiple views to textured 3D meshes: a GPU-powered approach”, in Proceedings of the Computer Vision on GPUs Workshop (CVGPU 2010), held in conjunction with ECCV 2010, Heraklion, Crete, Greece, 10 September 2010.

The electronic version of the above publication can be downloaded from my publications page.