Occlusion-tolerant and personalized 3D human pose estimation in RGB images
ReActNet: Temporal Localization of Repetitive Activities in Real-World Videos
Unsupervised and Explainable Assessment of Video Similarity
MocapNET: Ensemble of SNN Encoders for 3D Human Pose Estimation in RGB Images
3D Hand Tracking by Employing Probabilistic Principal Component Analysis to Model Action Priors
Accurate hand keypoint localization on mobile devices
Robust 3D Human Pose Estimation Guided by Filtered Subsets of Body Keypoints
Joint 3D tracking of a deformable object in interaction with a hand
A graph-based approach for detecting common actions in motion capture data and videos
Using a single RGB frame for real time 3D hand pose estimation in the wild
Online segmentation and classification of modeled actions performed in the context of unmodeled ones
A Hybrid Method for 3D Pose Estimation of Personalized Human Body Models
Back to RGB: 3D tracking of hands and hand-object interactions based on short-baseline stereo
Generative 3D Hand Tracking with Spatially Constrained Pose Sampling
Temporal Action Co-Segmentation in 3D Motion Capture Data and Videos
FORTH 3D Human Body Tracking
Tracking deformable surfaces that undergo topological changes using an RGB-D camera
Localizing Periodicity in Time Series and Videos
Towards Force Sensing from Vision
Head pose estimation on depth data based on Particle Swarm Optimization
3D Tracking of Human Hands in Interaction with Unknown Objects
Model-based 3D Hand Tracking with on-line Hand Shape Adaptation
Hierarchical Particle Filtering for 3D Hand Tracking
Tracking the articulated motion of the human body with two RGBD cameras
Gesture recognition supporting the interaction of humans with socially assistive robots
Synthesizing novel animations of periodic dances
Evolutionary Quasi-random Search for Hand Articulations Tracking
Scalable 3D Tracking of Multiple Interacting Objects
Physically Plausible 3D Scene Tracking: The Single Actor Hypothesis
Tracking the articulated motion of two strongly interacting hands
Full DOF tracking of a hand interacting with an object by modeling occlusions and physical constraints
Shape from Interaction
Efficient model-based 3D tracking of hand articulations using Kinect
Markerless and efficient 26-DOF hand pose recovery
Binding vision to physics based simulation: The case study of a bouncing ball
Integrating tracking with fine object segmentation
Multiple objects tracking in the presence of long-term occlusions
Scale invariant and deformation tolerant partial shape matching
From multiple views to textured 3D meshes: a GPU-powered approach
2D and 3D tracking of multiple skin colored regions
Finger detection
Vision-based human-computer interaction
PaperView: augmenting physical surfaces with location-aware digital information
Building a multi-touch display based on computer vision techniques
Multicamera human detection and tracking supporting natural interaction with large-scale displays
3D head pose estimation from multiple distant views
Camera tracking and scene augmentation
Using geometric constraints for matching disparate stereo views of 3D scenes containing planes
Independent 3D motion detection based on the computation of normal flow fields
Construction of perspectively correct images from panoramic views
View transformations for efficient, vision-based, traffic monitoring
Shape matching
Vision-based SLAM and moving objects tracking for the perceptual support of a smart walker platform
Visual homing for undulatory robotic locomotion
Lumen detection for capsule endoscopy
Biologically inspired reactive robot navigation based on a combination of central and peripheral vision
Bio-mimetic centering behavior for mobile robots equipped with panoramic cameras
Robot homing based on panoramic vision
Localizing unordered panoramic images based on the Levenshtein distance
Angle-based robot navigation
Semi-autonomous navigation of a robotic wheelchair
Autonomous robot navigation with application to robots in museums and exhibitions