New MOVING lecture video fragmentation technologies in VideoLectures.NET platform

The developer of the VideoLectures.NET portal, JSI, is part of the MOVING consortium, which has been developing new, more effective methods for lecture video fragmentation and fragment-level annotation, to allow fine-grained access to lecture video collections. In the latest MOVING method, developed by CERTH, automatically-generated speech transcripts of the lecture video are analysed using word embeddings produced by pre-trained state-of-the-art neural networks. This lecture video fragmentation method is part of the MOVING platform, and its results are also being ingested into the VideoLectures.NET portal, making it possible for the users of both platforms to access and view the specific fragments of lecture videos that match their information needs.
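The actual CERTH method is described in [1]; as a rough illustration of the general idea only, the minimal Python sketch below represents adjacent transcript windows by averaged pre-trained word embeddings and places a fragment boundary wherever the cosine similarity of consecutive windows drops. The GloVe model, the 50-word window size, and the similarity threshold are arbitrary illustrative choices, not parameters of the MOVING method.

```python
# Illustrative sketch of embedding-based transcript fragmentation.
# NOT the MOVING/CERTH implementation (see [1] for that method).
import numpy as np
import gensim.downloader as api

# Small pre-trained word embeddings (downloaded on first use).
model = api.load("glove-wiki-gigaword-50")

def window_vector(words):
    """Average the embeddings of the in-vocabulary words of one window."""
    vecs = [model[w] for w in words if w in model]
    return np.mean(vecs, axis=0) if vecs else np.zeros(model.vector_size)

def fragment_boundaries(transcript_words, window=50, threshold=0.7):
    """Return word indices where a new fragment plausibly starts."""
    windows = [transcript_words[i:i + window]
               for i in range(0, len(transcript_words), window)]
    vectors = [window_vector(w) for w in windows]
    boundaries = []
    for i in range(1, len(vectors)):
        a, b = vectors[i - 1], vectors[i]
        sim = a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)
        if sim < threshold:  # low similarity suggests a topic shift
            boundaries.append(i * window)
    return boundaries
```

In practice one would work with timestamped transcript segments rather than raw word positions, so that each detected boundary can be mapped back to a time offset in the video.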

For now, the fragments are accessible only for some lectures in VideoLectures.NET (testing phase); see for instance “Recurrent Neural Networks (RNNs)“. The fragments are presented as “chapters” to the right of the video player window, and help users find particular parts of a video more easily and quickly. Lectures on VideoLectures.NET are mostly between half an hour and an hour long, so if a user is interested only in a specific part of a lecture, the fragments (chapters) make it easier to find and consume the desired parts of the learning materials. In the future, fragments will be added to all lectures. To illustrate how the fragment-level information can be accessed and used, please watch the following demo: http://videolectures.net/moving_platform_VLNchapters/

For technical details on the lecture video fragmentation method, and for access to a new synthetic dataset of lecture video transcripts that MOVING has created and released based on real VideoLectures.NET data, see [1].

[1] D. Galanopoulos, V. Mezaris, “Temporal Lecture Video Fragmentation using Word Embeddings”, Proc. 25th Int. Conf. on Multimedia Modeling (MMM2019), Thessaloniki, Greece, Springer LNCS vol. 11296, pp. 254-265, Jan. 2019. DOI: https://doi.org/10.1007/978-3-030-05716-9_21

Dataset available at https://github.com/bmezaris/lecture_video_fragmentation