Panos Trahanias, Professor
Research Highlights




Human-Robot Interaction
  • Interaction with mobile robots

  • Research in Human-Robot Interaction has mainly addressed visual competences involved in interaction scenarios. Most recent work has been pursued within the INDIGO research project, which I coordinated.

    INDIGO was an EC-funded research project that finished in January 2010. The goal of INDIGO was to develop technology that advances human-robot interaction. This was achieved both by enabling robots to perceive natural human behavior and by making them act in ways that are familiar to humans.





    To download a short English version of the video, please click here


  • Articulated body pose tracking

  • Tracking of the upper human body is one of the most interesting and challenging research problems in computer vision and an important component of gesture recognition applications. In this work, a probabilistic approach to arm and hand tracking is presented. We propose the use of a kinematics model, together with a segmentation of the parameter space, to cope with the high dimensionality of the problem. Moreover, the combination of particle filters with hidden Markov models enables the simultaneous tracking of several hypotheses for the body orientation and the configuration of each arm.

    Videos: Experiment 1, Experiment 2, Experiment 3
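    As a rough illustration of the tracking machinery described above, the sketch below implements one generic predict/update/resample cycle of a particle filter over a low-dimensional pose. The 2-DOF "arm", the noise levels, and the Gaussian observation model are illustrative assumptions only, not the parameters of the actual system (which also couples the filter with hidden Markov models).

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, weights, observe, motion_std=0.05):
    """One predict/update/resample cycle of a generic particle filter.

    particles : (N, D) array of pose hypotheses (e.g. arm joint angles)
    weights   : (N,) normalized importance weights
    observe   : function mapping a particle to its observation likelihood
    """
    # Predict: diffuse each hypothesis with random-walk motion noise.
    particles = particles + rng.normal(0.0, motion_std, particles.shape)
    # Update: reweight by how well each hypothesis explains the image.
    weights = weights * np.array([observe(p) for p in particles])
    weights /= weights.sum()
    # Resample when the effective sample size collapses.
    n_eff = 1.0 / np.sum(weights ** 2)
    if n_eff < 0.5 * len(particles):
        idx = rng.choice(len(particles), size=len(particles), p=weights)
        particles = particles[idx]
        weights = np.full(len(particles), 1.0 / len(particles))
    return particles, weights

# Toy usage: track a 2-DOF "arm" whose true joint angles are (0.3, -0.2).
true_pose = np.array([0.3, -0.2])
observe = lambda p: np.exp(-np.sum((p - true_pose) ** 2) / 0.02)
particles = rng.uniform(-1.0, 1.0, (500, 2))
weights = np.full(500, 1.0 / 500)
for _ in range(30):
    particles, weights = particle_filter_step(particles, weights, observe)
estimate = np.average(particles, axis=0, weights=weights)
```

    Segmenting the parameter space, as the work proposes, would amount to running such filters over smaller blocks of the pose vector instead of one filter over the full configuration.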


  • Hand/Face tracking through propagation of pixel hypotheses

  • The proposed approach differs significantly from existing ones in important aspects of the representation of the location and shape of tracked objects and of the uncertainty associated with them. The location and speed of each object are modeled as a discrete-time linear dynamical system, which is tracked using Kalman filtering. Information about the spatial distribution of the pixels of each tracked object is passed on from frame to frame by propagating a set of pixel hypotheses, uniformly sampled from the original object's projection, to the target frame using the object's current dynamics as estimated by the Kalman filter. The density of the propagated pixel hypotheses provides a novel metric that is used to associate image pixels with existing object tracks, taking into account both the shape of each object and the uncertainty associated with its track. The proposed tracking approach has been developed to support face and hand tracking for human-robot interaction. Nevertheless, it is readily applicable to a much broader class of multiple-object tracking problems.

    Videos (uncompressed and compressed versions) for the Office and the Human-Robot interaction experiments, showing the initial sequence, skin-color probabilities, foreground pixels, skin-colored blobs, and the predicted and updated tracker states.
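    The dynamics part of the approach can be sketched as a standard constant-velocity Kalman filter over a blob centroid. This is a minimal, self-contained illustration; the state layout, the noise covariances, and the toy measurement stream are assumptions for the example, not the values used in the actual tracker (which additionally propagates pixel hypotheses with the estimated dynamics).

```python
import numpy as np

# Constant-velocity model: state x = [px, py, vx, vy].
dt = 1.0
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)   # state transition
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)   # we observe position only
Q = 0.01 * np.eye(4)                        # assumed process noise
R = 0.5 * np.eye(2)                         # assumed measurement noise

def kalman_step(x, P, z):
    """One predict/update cycle; z is the measured blob centroid."""
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P

# Toy track: a blob moving at (1.0, 0.5) pixels/frame with noisy centroids.
rng = np.random.default_rng(1)
x, P = np.zeros(4), 10.0 * np.eye(4)
for t in range(1, 50):
    z = np.array([t * 1.0, t * 0.5]) + rng.normal(0.0, 0.3, 2)
    x, P = kalman_step(x, P, z)
```

    In the full method, the pixel hypotheses sampled from the object's mask would be shifted by the estimated velocity x[2:4] before being matched against the next frame.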


  • Distinguishing between hands and faces - Hand Gesture Recognition

  • Our hand and face tracker (described above) provides a set of blob hypotheses that correspond to the location of hands and faces of people that are in front of the robot. To proceed with higher-level tasks like hand gesture recognition, one has to distinguish between hypotheses that belong to hands and hypotheses that belong to faces. Moreover, for hand hypotheses, one has to know which hypotheses belong to left hands and which hypotheses belong to right hands.

    Towards this goal, we have developed a technique that incrementally classifies a hypothesis into one of three classes: faces, left hands, and right hands. The incremental classifier computes a belief about the class of each hypothesis based on a set of features that carry information about the hypothesis's shape, location, and speed. For each new frame, the belief is incrementally updated from the belief of the previous frame and the current observations.

    Videos (uncompressed and compressed versions): Experiment 1 (simulated bar environment), Experiment 2, Experiment 3
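    The incremental update described above has the shape of a recursive Bayesian filter over the three classes. The sketch below shows that shape; the sticky transition model and the toy per-frame likelihoods are assumptions for illustration, standing in for the actual shape/location/speed feature models.

```python
import numpy as np

CLASSES = ("face", "left_hand", "right_hand")

def update_belief(belief, likelihoods, stickiness=0.9):
    """Recursive Bayesian update of the class belief of one blob hypothesis.

    belief      : (3,) prior over (face, left hand, right hand)
    likelihoods : (3,) P(current features | class), from per-class models
    stickiness  : probability that the class label persists between frames
    """
    # Soft transition: a blob's class is almost always the same as before.
    T = stickiness * np.eye(3) + (1.0 - stickiness) / 3.0
    predicted = T @ belief
    posterior = predicted * likelihoods
    return posterior / posterior.sum()

# Toy usage: frames whose features repeatedly look most like a left hand.
belief = np.full(3, 1.0 / 3.0)
for _ in range(10):
    belief = update_belief(belief, np.array([0.2, 0.6, 0.2]))
```

    After a few frames of consistent evidence the belief concentrates on one class, while the soft transition keeps the classifier able to recover from early misclassifications.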


    Relevant Publications

    H. Baltzakis, M. Pateraki, P. Trahanias, "Visual tracking of hands, faces and facial features of multiple persons.", Machine Vision and Applications, pp. 1-17, 2012, doi:10.1007/s00138-012-0409-5.

    M. Pateraki, H. Baltzakis, P. Trahanias, "Using Dempster's rule of combination to robustly estimate pointed targets", In Proc. of the IEEE International Conference on Robotics and Automation (ICRA), May 14-18, St. Paul, Minnesota, USA (accepted for publication), 2012.

    M. Sigalas, H. Baltzakis, and P. Trahanias, "Gesture recognition based on arm tracking for human-robot interaction", In Proc. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 5424-5429, 18-22 Oct. 2010.

    M. Pateraki, H. Baltzakis, P. Kondaxakis, and P. Trahanias, Tracking of facial features to support human-robot interaction, In Proc. of IEEE International Conference on Robotics and Automation (ICRA), Kobe, Japan, May 2009.

    M. Sigalas, H. Baltzakis, and P. Trahanias, Visual tracking of independently moving body and arms, In Proc. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), St. Louis, MO, USA, October 2009.



Autonomous Navigation
  • Using multi-hypothesis mapping to close loops in complex cyclic environments

  • The method consists of two phases. During the first phase, the algorithm creates and tracks a number of possible robot paths along with their corresponding maps. After all data is processed, the algorithm decides which of the robot paths is most probable. During the second phase of the method, an EM procedure is used in order to rectify the robot's path and the corresponding map.

    Videos (Castello di Belgioioso, Italy): Phase A, Phase B
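    Phase A of the method can be pictured as a branch-and-prune search over candidate robot paths. The sketch below shows that structure in a hypothetical form: the hypothesis class, the branching and scoring callbacks, and the toy 1-D "path" are all assumptions for illustration, not the actual data associations or the EM rectification of Phase B.

```python
class MapHypothesis:
    """One candidate robot path together with its log-likelihood score."""
    def __init__(self, path, log_lik=0.0):
        self.path = path
        self.log_lik = log_lik

def branch_and_prune(hypotheses, candidates_fn, score_fn, max_hyp=10):
    """One Phase-A step: branch each hypothesis on ambiguous data
    associations (e.g. 'new corridor' vs 'loop closure'), score each
    branch against the sensor data, and keep only the best few."""
    branched = []
    for h in hypotheses:
        for pose in candidates_fn(h):
            branched.append(
                MapHypothesis(h.path + [pose], h.log_lik + score_fn(h, pose)))
    branched.sort(key=lambda h: h.log_lik, reverse=True)
    return branched[:max_hyp]

# Toy usage: 1-D poses, two candidate moves per step, and a score that
# rewards poses consistent with the (known) true motion of one step/frame.
hyps = [MapHypothesis([0])]
candidates = lambda h: [h.path[-1] + 1, h.path[-1] - 1]
score = lambda h, pose: -abs(pose - len(h.path))
for _ in range(5):
    hyps = branch_and_prune(hyps, candidates, score)
best = hyps[0]
```

    After all data are processed, the highest-scoring hypothesis is the one handed to Phase B for rectification.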


  • Mobile robot localization using Switching State-Space Modeling

  • In order to carry out complex navigational tasks, an autonomous robotic agent must be able to provide answers to the "Where am I?" question, that is, to localize itself within its environment.
    To reduce the inherent complexity of this problem, appropriate geometric constraints must be adopted, in combination with effective modelling of the related information. The resulting abstraction not only makes robotic problems computationally feasible, but also provides robustness in the presence of noise or other, often unpredictable, factors. Probabilistic models proposed for this purpose generally fall into two major categories: Hidden Markov Models (HMMs) and Kalman filters. Kalman filter approaches are better with respect to computational efficiency, scalability, and accuracy. On the other hand, HMM-based approaches have proved more robust in the presence of noise and/or unreliable odometry information.
    To combine the advantages from both of the above-mentioned approaches, we have proposed a probabilistic framework for modelling the robot's state and sensory information, based on Switching State-Space Models. A central concept in our framework is to let HMM models handle the qualitative aspects of the problem, i.e. perform coarse localization, and Kalman filters the metric aspects, that is, elaborate on the previous result and provide accurate localization. Discrete HMM update equations are used in order to update the probabilities assigned to a fixed, small number of discrete states, while Kalman filter based trackers, operating within each discrete state, are used in order to provide accurate metric information.
    High-level features, consisting of sequences of line segments and corner points robustly extracted from laser range data, are used to facilitate the implementation of the model, while a fast dynamic programming algorithm produces and scores matches between them and an a-priori map. Experimental results have shown the applicability of the algorithm to indoor navigation tasks where the global localization capabilities of HMM approaches and the efficiency and accuracy of Kalman filter approaches are required at the same time.

    Videos: Experiment 1 (artificial environment), Experiment 2 (real environments 1 and 2)
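    The coarse/fine division of labor described above can be sketched as an HMM belief over a few discrete regions, with a small Kalman tracker per region refining the metric pose. Everything in this example is a toy assumption: two "rooms", scalar 1-D trackers that share the same range measurement, and hand-picked noise values; the actual system uses laser features and a dynamic programming matcher.

```python
import numpy as np

# Two discrete "rooms" (HMM states); in each, a 1-D Kalman filter
# tracks the robot's metric position along the corridor.
A = np.array([[0.95, 0.05],    # rooms are sticky: the robot rarely
              [0.05, 0.95]])   # switches rooms in a single step
belief = np.array([0.5, 0.5])  # start globally uncertain
trackers = [[0.0, 4.0], [0.0, 4.0]]  # (mean, variance) per room
R = 0.25                       # range-measurement noise variance

def step(belief, trackers, z_room_lik, z_metric):
    # Coarse localization: discrete HMM update over rooms.
    belief = (A.T @ belief) * z_room_lik
    belief = belief / belief.sum()
    # Fine localization: scalar Kalman update inside each room.
    for t in trackers:
        mean, var = t
        k = var / (var + R)
        t[0] = mean + k * (z_metric - mean)
        t[1] = (1.0 - k) * var
    return belief, trackers

# The robot is actually in room 0, at metric position 1.5 along it.
for _ in range(8):
    belief, trackers = step(belief, trackers, np.array([0.8, 0.2]), 1.5)
```

    The HMM belief converges to the correct room (coarse localization) while the winning room's Kalman tracker converges to the metric position (accurate localization), mirroring the division of labor in the proposed framework.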


    Relevant Publications

    A. Foka, P. Trahanias, "Probabilistic Autonomous Robot Navigation in Dynamic Environments with Human Motion Prediction", I.J Social Robotics 2(1): 79-94 (2010).

    A. Foka, P. Trahanias, "Real-time hierarchical POMDPs for autonomous robot navigation", Robotics and Autonomous Systems 55 (2007) 561–571

    H. Baltzakis, P. Trahanias, Using Multi-hypothesis Mapping to Close Loops in Complex Cyclic Environments, in proc. IEEE international conference on robotics and automation (ICRA06), 15-19 May 2006, Orlando, Florida, USA.

    H. Baltzakis, P. Trahanias, A Hybrid Framework for Mobile Robot Localization: Formulation Using Switching State-Space Models, Autonomous Robots, 15(2):169-191, September 2003.

    H. Baltzakis, P. Trahanias, Closing multiple loops while mapping features in cyclic environments, In Proc. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 717-723, Las Vegas, USA, October 2003.

    H. Baltzakis, P. Trahanias, An Iterative Approach for Building Feature Maps in Cyclic Environments, IROS 2002,IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems, pp.576-581, Lausanne, Switzerland, Sep.30-Oct.4, 2002.



Brain Modeling / Time Perception

Robotic Demos
  • Robot MUFIK at the Natural History Museum of Crete

  • Since 18-Jan-2010, MUFIK, an autonomously navigating robot, has been installed at the premises of the Natural History Museum of Crete at Heraklion. The robot runs as a permanent installation at the museum, interacting with visitors via a touch screen and a simple simulated face, autonomously guiding them around the foyer of the museum and showing them the exhibits.
    The robot runs completely unattended and is operated on a daily basis by the personnel of the museum. In the next few months (i.e. within the spring of 2010), another, identical robot will be installed in a different section of the same museum, intended to offer guided tours of the exhibits of that section.



  • INDIGO project: Evaluation sessions at FHW

  • The INDIGO project addressed the development of human-robot interaction technology. It involved multi-modal, bi-directional interaction. Moreover, it employed a mechanical head capable of mimicking human facial expressions and supporting naturalistic spoken conversation. The head was embodied on a mobile robot empowered with advanced autonomous navigation skills. The overall system was able to act according to motion patterns that are familiar to humans.
    Advanced natural dialogue capabilities facilitated the overall goal of human-robot interaction. Natural dialogue involved input and output from various modalities, such as spoken natural language, gestures, emotions, and facial expressions. While the emphasis was on technologies that allow robots to generate natural descriptions of their physical surroundings, INDIGO also addressed interpretation of a relatively broad range of input.
    Emphasis was placed on the creation of appropriate user models, both for humans interacting with the robot and for the robot itself. The user models were used to drive the dialogue management system and thus to adapt the behavior of the robot to the perceived user profile, as well as to the knowledge, personality, and gathered experience of the robot itself.
    INDIGO was demonstrated by deploying a prototype system at the Hellenic Cosmos, a Cultural Centre located in Athens. The prototype operated autonomously, interacting with humans inexperienced in robots, offering them the possibility to engage with advanced robotics technologies. Three extensive evaluation sessions took place in the period June-December 2009.



  • 73rd International fair of Thessaloniki

  • From September 5th to September 14th, 2008, a prototype system was exhibited at the International Fair of Thessaloniki, the most prestigious trade fair held in Greece. During the event, various software modules, including autonomous navigation, people tracking, and vision modules, were demonstrated. A simplified dialogue management system was also installed, offering human-robot interaction capabilities to the visitors of the exhibition. Input from the users was mostly given through the touch-screen interface. The exhibited prototype was able to operate for more than 10 hours each day without any significant problems, constantly attracting visitors - mostly children - who wanted to "play" with it.



    Relevant Publications

    P. Trahanias, W. Burgard, A.A. Argyros, D. Haehnel, H. Baltzakis, P. Pfaff, C. Stachniss, Tourbot and WebFair: Web Operated Mobile Robots for Telepresence in Populated Exhibitions, IEEE Robotics and Automation Magazine, Special issue on EU-funded projects in Robotics, vol. 12, no. 2, pp. 77-89, June 2005.

    W. Burgard, P. Trahanias, D. Hahnel, M. Moors, D. Schulz, H. Baltzakis, A.A. Argyros, Tele-presence in Populated Exhibitions through Web-operated Mobile Robots, journal of Autonomous Robots, Kluwer Academic Publishers, 15, 299-316, 2003

    P.E. Trahanias, W. Burgard, D. Haehnel, M. Moors, D. Schulz, H. Baltzakis and A. Argyros, Interactive Tele-Presence in Exhibitions through Web-operated Robots, in proceedings of the 11th International Conference on Advanced Robotics (ICAR03), invited session on Robotics and Art, pp. 1253-1258, University of Coimbra, Coimbra, Portugal, June 30 - July 3, 2003.




