Brief description

We present a novel, non-intrusive approach for estimating contact forces during hand-object interactions that relies solely on the visual input provided by a single RGB-D camera. We consider a manipulated object with known geometric and physical properties. First, we use model-based visual tracking to estimate the object's pose, together with that of the hand manipulating it, throughout the motion. We then compute the object's first- and second-order kinematics using a new class of numerical differentiation operators. The estimated kinematics are fed into a second-order cone program that returns a minimal force distribution explaining the observed motion. However, humans typically apply larger forces than mechanically required when manipulating objects. We therefore complete our estimation method by learning these excess forces and their distribution among the fingers in contact. We thoroughly validate the proposed method against ground-truth data from additional sensors, namely accelerometers, gyroscopes and pressure sensors. Experimental results show that force sensing from vision (FSV) is indeed feasible.
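To illustrate the force-distribution step, the following is a minimal sketch of recovering a minimum-norm set of contact forces that explains an observed object motion via the Newton-Euler equations. All numerical values (mass, inertia, contact points, accelerations) are illustrative assumptions, and the pseudoinverse solve is a simplified least-squares stand-in for the full second-order cone program with friction-cone constraints used in the actual method.

```python
import numpy as np

# Hypothetical example: a 1 kg object held at three contact points,
# with linear acceleration estimated from the vision-based kinematics.
m = 1.0                                  # object mass [kg]
g = np.array([0.0, 0.0, -9.81])          # gravity [m/s^2]
a = np.array([0.5, 0.0, 0.2])            # estimated linear acceleration
alpha = np.zeros(3)                      # angular acceleration (assumed zero)
I = np.eye(3) * 1e-3                     # inertia tensor (illustrative value)

# Contact points expressed relative to the object's center of mass.
contacts = np.array([[ 0.03,  0.00, 0.0],
                     [-0.03,  0.02, 0.0],
                     [-0.03, -0.02, 0.0]])

def skew(p):
    """Cross-product matrix, so that skew(p) @ f == np.cross(p, f)."""
    return np.array([[0.0, -p[2], p[1]],
                     [p[2], 0.0, -p[0]],
                     [-p[1], p[0], 0.0]])

# Grasp matrix G maps the stacked contact forces to the net wrench
# (force and torque) they exert on the object.
G = np.vstack([np.hstack([np.eye(3)] * len(contacts)),
               np.hstack([skew(p) for p in contacts])])

# Newton-Euler wrench the contacts must supply to explain the motion.
wrench = np.concatenate([m * (a - g), I @ alpha])

# Minimum-norm force distribution satisfying G f = wrench.
f = np.linalg.pinv(G) @ wrench
forces = f.reshape(-1, 3)                # one 3D force per contact
```

Solving with a pseudoinverse picks the smallest (in the least-squares sense) forces consistent with the motion; the paper's cone program additionally enforces that each force stays inside its friction cone at the contact.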

See also:

Sample results

Video with experimental results


  • T.-H. Pham, A. Kheddar, A. Qammaz, A.A. Argyros
  • This work has been supported by the EU project ROBOHOW.

Relevant publications

  • T.-H. Pham, A. Kheddar, A. Qammaz, A. A. Argyros, "Towards Force Sensing from Vision: Observing Hand-Object Interactions to Infer Manipulation Forces", IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2015), Boston, Massachusetts, June 7-12, 2015.

The electronic versions of the above publications can be downloaded from my publications page.