Unsupervised and Explainable Assessment of Video Similarity



Brief description

We propose a novel unsupervised method that assesses the similarity of two videos on the basis of the estimated relatedness of the objects and their behavior, and provides arguments supporting this assessment. A video is represented as a complete undirected action graph that encapsulates information on the types of objects and the way they (inter)act. The similarity of a pair of videos is estimated based on the bipartite Graph Edit Distance (GED) of the corresponding action graphs. As a consequence, on top of estimating a quantitative measure of video similarity, our method establishes spatiotemporal correspondences between objects across videos when these objects are semantically related, when they interact similarly, or both. We consider this an important step towards the explainable assessment of video and action similarity. The proposed method is evaluated on a publicly available dataset on the tasks of activity classification and ranking, and is shown to compare favorably to state-of-the-art supervised learning methods.
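
As a rough illustration of the core idea, the Python sketch below (not the authors' implementation) builds a toy action graph per video and scores a pair of graphs with the standard bipartite approximation of Graph Edit Distance, i.e., a linear assignment over node substitution, insertion and deletion costs. The graph attributes, cost values and function names are illustrative assumptions.

# Rough sketch (not the authors' code): video similarity via a bipartite
# approximation of Graph Edit Distance between two action graphs.
# All labels, descriptors and cost values below are illustrative assumptions.

import numpy as np
import networkx as nx
from scipy.optimize import linear_sum_assignment

def action_graph(object_labels, interactions):
    # Complete undirected graph: one node per detected object, one edge per pair.
    # object_labels: {node_id: semantic label}, interactions: {(i, j): descriptor}
    g = nx.complete_graph(sorted(object_labels))
    nx.set_node_attributes(g, object_labels, "label")
    for (i, j), desc in interactions.items():
        g.edges[i, j]["interaction"] = desc
    return g

def node_sub_cost(a, b):
    # Toy substitution cost: objects of the same semantic class match for free.
    return 0.0 if a["label"] == b["label"] else 1.0

def bipartite_ged(g1, g2, ins_del_cost=1.0):
    # Riesen/Bunke-style approximation: an (n1+n2) x (n1+n2) cost matrix whose
    # blocks encode substitutions, deletions and insertions, solved as a
    # linear assignment problem (Hungarian algorithm).
    n1, n2 = g1.number_of_nodes(), g2.number_of_nodes()
    nodes1, nodes2 = list(g1.nodes), list(g2.nodes)
    forbid = 1e9                      # disallow off-diagonal dummy pairings
    C = np.zeros((n1 + n2, n1 + n2))
    C[:n1, :n2] = [[node_sub_cost(g1.nodes[u], g2.nodes[v]) for v in nodes2]
                   for u in nodes1]
    C[:n1, n2:] = forbid
    np.fill_diagonal(C[:n1, n2:], ins_del_cost)   # delete a node of g1
    C[n1:, :n2] = forbid
    np.fill_diagonal(C[n1:, :n2], ins_del_cost)   # insert a node of g2
    rows, cols = linear_sum_assignment(C)
    distance = C[rows, cols].sum()
    # Real-to-real assignments are the object correspondences across videos.
    matches = [(nodes1[r], nodes2[c]) for r, c in zip(rows, cols)
               if r < n1 and c < n2]
    return distance, matches

# Toy usage on two hypothetical videos:
g_a = action_graph({0: "person", 1: "cup"}, {(0, 1): "grasp"})
g_b = action_graph({0: "person", 1: "bottle"}, {(0, 1): "grasp"})
dist, matches = bipartite_ged(g_a, g_b)

For brevity the sketch scores nodes by their semantic labels only; a full bipartite GED also folds the cost of matching each node's incident edges (here, the pairwise interaction descriptors) into the assignment costs, which is what lets the recovered node correspondences reflect both object semantics and interaction similarity.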


Sample results

A video with a description of the approach and experimental results is available.


Main web page

Check the web page of Kostas Papoutsakis.


Contributors

  • Kostas Papoutsakis
  • Antonis Argyros


Relevant publications

  • K. Papoutsakis, A.A. Argyros, “Unsupervised and Explainable Assessment of Video Similarity”, British Machine Vision Conference (BMVC 2019), Cardiff, UK, September 2019.

The electronic version of the above publication can be downloaded from my publications page.