Third DIHARD Challenge Development

Full Official Name: Third DIHARD Challenge Development
Submission date: Nov. 11, 2022, 9:29 p.m.

Introduction: Third DIHARD Challenge Development was developed by the Linguistic Data Consortium (LDC) and contains approximately 34 hours of English and Chinese speech data along with corresponding annotations used in support of the Third DIHARD Challenge. The DIHARD Challenges are a series of shared tasks focusing on "hard" diarization: speech diarization for challenging corpora where existing state-of-the-art systems were expected to fare poorly. As with the first and second challenges, the third development and evaluation sets were drawn from a diverse sampling of sources including monologues, map task dialogues, broadcast interviews, sociolinguistic interviews, meeting speech, speech in restaurants, clinical recordings, and amateur web videos.

Data: Data sources in this release are as follows (all sources are in English unless otherwise indicated):

- Autism Diagnostic Observation Schedule (ADOS) interviews
- Conversations in Restaurants
- DCIEM/HCRC map task (LDC96S38)
- Conversational telephone speech collected by LDC for the Fisher project
- Audiobook recordings from LibriVox
- Meeting speech from the 2004 Spring NIST Rich Transcription (RT-04S) Development (LDC2007S11) and Evaluation (LDC2007S12) releases
- 2001 U.S. Supreme Court oral arguments
- Sociolinguistic interviews from the SLX Corpus of Classic Sociolinguistic Interviews (LDC2003T15)
- Mixer 6 Speech (LDC2013S03)
- English and Chinese video collected by LDC as part of the Video Annotation for Speech Technologies (VAST) project
- YouthPoint radio interviews

All audio is provided in the form of 16 kHz, 16-bit, mono-channel FLAC files. The diarization for each recording is stored as a NIST Rich Transcription Time Marked (RTTM) file. RTTM files are space-separated text files containing one turn per line. Segmentation files are stored as HTK label files; each of these files contains one speech segment per line.
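The RTTM turn lines described above can be read with a short script. Below is a minimal sketch assuming the standard ten-field RTTM SPEAKER record layout (type, file ID, channel, onset in seconds, duration in seconds, then orthography/type/speaker/confidence/lookahead fields); the file and speaker IDs in the sample lines are hypothetical, not taken from this corpus.

```python
# Minimal RTTM parser sketch. Assumes the standard ten-field SPEAKER
# record: type, file-id, channel, onset, duration, ortho, stype,
# speaker, conf, slat. Only onset, duration, and speaker are used here.
from collections import defaultdict

def parse_rttm(lines):
    """Return {file_id: [(onset, offset, speaker), ...]} from RTTM lines."""
    turns = defaultdict(list)
    for line in lines:
        fields = line.split()
        if not fields or fields[0] != "SPEAKER":
            continue  # skip blank lines and non-turn record types
        file_id = fields[1]
        onset = float(fields[3])
        duration = float(fields[4])
        speaker = fields[7]
        turns[file_id].append((onset, onset + duration, speaker))
    return dict(turns)

# Hypothetical example lines, not drawn from the corpus itself.
sample = [
    "SPEAKER FILE_0001 1 0.52 4.30 <NA> <NA> spk01 <NA> <NA>",
    "SPEAKER FILE_0001 1 3.90 2.10 <NA> <NA> spk02 <NA> <NA>",
]
for onset, offset, speaker in parse_rttm(sample)["FILE_0001"]:
    print(f"{speaker}: {onset:.2f}-{offset:.2f}")
```

Note that turns from different speakers may overlap in time, as in the sample above; overlapping speech is one of the properties that makes the DIHARD corpora "hard".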
Scoring regions for each recording are specified by un-partitioned evaluation map (UEM) files. All annotation file types are encoded as UTF-8. More information about the file formats, data sources, and domains is in the included documentation.

Samples: Please view these samples:

- Audio Sample (FLAC)
- Label Sample (TXT)
- RTTM Sample (TXT)

Updates: Additional information, updates, and bug fixes may be available in the LDC catalog entry for this corpus at LDC2022S12.