HAVIC MED Novel 2 Test -- Videos, Metadata and Annotation

Full Official Name: HAVIC MED Novel 2 Test -- Videos, Metadata and Annotation
Submission date: Aug. 12, 2022, 4:44 p.m.

HAVIC MED Novel 2 Test -- Videos, Metadata and Annotation was developed by the Linguistic Data Consortium (LDC) and comprises approximately 6,200 hours of user-generated videos with annotation and metadata. To advance multimodal event detection and related technologies, LDC developed, in collaboration with NIST (the National Institute of Standards and Technology), a large, heterogeneous, annotated multimodal corpus for HAVIC (the Heterogeneous Audio Visual Internet Collection), which was used in the NIST-sponsored MED (Multimedia Event Detection) task for several years. HAVIC MED Novel 2 Test is a subset of that corpus: a collection of videos, metadata and annotation originally released to support the 2015 Multimedia Event Detection tasks.

Data

The data consists of videos of various events (event videos) and videos completely unrelated to events (background videos) harvested by a large team of human annotators. Each event video was manually annotated with a set of judgments describing its event properties and other salient features. Background videos were labeled with topic and genre categories. All video files are in .mp4 format (h.264), with varying bit-rates and levels of audio fidelity and video resolution. Metadata and annotation for the videos are stored in a .tsv file.
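As a minimal sketch of working with the annotation file, the .tsv can be parsed with Python's standard csv module using a tab delimiter. The column names below (video_id, label, genre) are hypothetical placeholders, not the release's actual schema; consult the header of the .tsv shipped with the corpus for the real field names.

```python
import csv
import io

# Hypothetical annotation rows mimicking a tab-separated file;
# the real corpus .tsv will have its own columns and values.
sample = (
    "video_id\tlabel\tgenre\n"
    "VID00001\tevent\tsports\n"
    "VID00002\tbackground\tmusic\n"
)

# Parse the tab-separated text into dictionaries keyed by header.
rows = list(csv.DictReader(io.StringIO(sample), delimiter="\t"))

# Separate event videos from background videos by their label.
event_videos = [r["video_id"] for r in rows if r["label"] == "event"]
print(event_videos)  # -> ['VID00001']
```

For a file on disk, the same DictReader pattern applies after opening the .tsv with newline="" as the csv documentation recommends.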

Right Holder(s)