VITALAS at TRECVID-2009

Christos Diou, George Stephanopoulos, Nikos Dimitriou, Panagiotis Panagiotopoulos, Christos Papachristou, Anastasios Delopoulos, Henning Rode, Theodora Tsikrika, Arjen P. de Vries, Daniel Schneider, Jochen Schwenninger, Marie Luce Viaud, Agnès Saulnier, Peter Altendorf, Birgit Schröter, Matthias Elser, Angel Rego, Alex Rodriguez, Cristina Martínez, Iñaki Etxaniz, Gérard Dupont, Bruno Grilhères, Nicolas Martin, Nozha Boujemaa, Alexis Joly, Raffi Enficiaud, Anne Verroust, Souheil Selmi, Mondher Khadhraoui

Research output: Contribution to conference › Paper › peer-review

5 Citations (Scopus)

Abstract

This paper describes the participation of VITALAS in the TRECVID-2009 evaluation, where we submitted runs for the High-Level Feature Extraction (HLFE) and Interactive Search tasks. For the HLFE task, we focus on the evaluation of low-level feature sets and fusion methods. The runs employ multiple low-level features based on all available modalities (visual, audio and text), and the results show that using such features significantly improves retrieval effectiveness. We also use a concept score fusion approach that achieves good results with reduced low-level feature vector dimensionality. Furthermore, a weighting scheme is introduced for cluster assignment in the "bag-of-words" approach. Our runs achieved good performance compared to a baseline run and the submissions of other TRECVID-2009 participants. For the Interactive Search task, we focus on the evaluation of the integrated VITALAS system, in order to gain insights into the use and effectiveness of the system's search functionalities on (the combination of) multiple modalities and to study the behavior of two user groups: professional archivists and non-professional users. Our analysis indicates that both user groups submit about the same total number of queries and use the search functionalities in a similar way, but professional users save twice as many shots and examine shots deeper in the ranked retrieved list. The agreement between the TRECVID assessors and our users was quite low. In terms of the effectiveness of the different search modalities, similarity searches retrieve on average twice as many relevant shots as keyword searches, fused searches three times as many, and concept searches up to five times as many, indicating the benefit of using robust concept detectors in multimodal video retrieval.
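The abstract names a weighting scheme for cluster assignment in the "bag-of-words" approach but does not detail it. As a general illustration of that idea (not the authors' exact scheme), the sketch below soft-assigns each local descriptor to its nearest visual-word clusters with distance-based weights instead of a single hard count; the function name, the Gaussian kernel, and the top-k choice are assumptions made for this example.

```python
import numpy as np

def soft_bow_histogram(descriptors, codebook, sigma=1.0, top_k=5):
    """Build a bag-of-words histogram with soft cluster assignment.

    Each local descriptor votes for its top_k nearest visual words,
    weighted by a Gaussian of the descriptor-to-centroid distance,
    rather than contributing a single hard count. Illustrative sketch
    only; the kernel and top_k are assumptions, not the VITALAS scheme.
    """
    n_words = codebook.shape[0]
    hist = np.zeros(n_words)
    for d in descriptors:
        dists = np.linalg.norm(codebook - d, axis=1)    # distance to every centroid
        nearest = np.argsort(dists)[:top_k]             # candidate visual words
        weights = np.exp(-dists[nearest] ** 2 / (2 * sigma ** 2))
        if weights.sum() > 0:
            hist[nearest] += weights / weights.sum()    # normalized soft votes
    return hist / max(len(descriptors), 1)              # normalize by descriptor count

# Example: 200 random 128-D descriptors quantized against a 500-word codebook
rng = np.random.default_rng(0)
descriptors = rng.normal(size=(200, 128))
codebook = rng.normal(size=(500, 128))
h = soft_bow_histogram(descriptors, codebook)
```

Compared with hard assignment, such soft weighting spreads the contribution of descriptors that fall near cluster boundaries, which is one common motivation for weighting schemes of this kind.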

Original language: English
Publication status: Published - 2009
Event: TREC Video Retrieval Evaluation, TRECVID 2009 - Gaithersburg, MD, United States
Duration: 16 Nov 2009 – 17 Nov 2009

Conference

Conference: TREC Video Retrieval Evaluation, TRECVID 2009
Country/Territory: United States
City: Gaithersburg, MD
Period: 16/11/09 – 17/11/09
