Successful participation in the AVS and MED tasks of TRECVID 2016!

MOVING, via its consortium member CERTH, successfully participated in the Ad-hoc Video Search (AVS) and event-based annotation (MED) tasks of TRECVID 2016. The AVS task models the end-user video search use case, where the user is looking for video segments containing persons, objects, activities, locations, etc., and combinations of these. This year's experiments were performed on a set of Internet Archive videos totaling about 600 hours of video, using 30 different queries.

Our fully automatic runs performed very well in this challenging task compared to the runs of the other participating institutions from all over the world. Specifically, our best run was ranked 2nd-best, achieving an inferred average precision of 0.051 (compared to 0.054 reached by the best-performing participant in the fully automatic category, and 0.040 reached by the 3rd-best-performing one). Interestingly, our fully automatic runs also compared favorably to the manually assisted runs submitted to AVS: with an inferred average precision of 0.051, our best fully automatic run outperformed the runs of all but one participant in the manually assisted category.

We also achieved very good results in the event-based annotation task (MED), where we tested our machine learning techniques for video annotation using a variable number of training samples. Our participation in the AVS and MED tasks this year was jointly supported by MOVING and by another H2020 EU project, InVID.