ETAPE Evaluation Package

Full Official Name: ETAPE Evaluation Package
Submission date: Feb. 23, 2017, 5:14 p.m.

The ETAPE project (Evaluation en Traitement Automatique de la Parole) is an evaluation campaign for automatic speech processing systems. The project was funded by the French National Research Agency (ANR) under grant agreement ANR-09-CORD-009.

The ETAPE 2011 campaign follows the series of ESTER campaigns organized in 2003, 2005 and 2009 (see also ELRA-E0021, ELRA-S0241, ELRA-S0305 and ELRA-S0338 for resources from the ESTER campaigns), targeting a wider variety of speech quality and the more difficult challenge of spontaneous speech. While the initial ESTER campaigns targeted radio broadcast news, the 2009 edition introduced accented speech and non-news shows with spontaneous speech. The ETAPE 2011 evaluation focuses on TV material with various levels of spontaneous speech and multiple-speaker speech. Beyond spontaneous speech, one of the original aspects of the ETAPE 2011 campaign is that it does not target any particular type of show, such as news, thus fostering the development of general-purpose transcription systems for professional-quality multimedia material.

As in the past, several tasks were evaluated independently on the same dataset. Four tasks were considered in the ETAPE 2011 benchmark. For historical reasons, the tasks belong to one of three categories: segmentation, transcription and information extraction. The multiple-speaker detection task was run as an exploratory task, given the lack of prior background on it.

The ETAPE 2011 data consists of about 30 hours of French radio and TV data, selected to include mostly non-planned speech and a reasonable proportion of multiple-speaker data. All data were carefully transcribed, including named entity annotation. Within the scope of the ETAPE ANR project, phonetic alignments and syntactic trees enrich part of the ETAPE data set.

This package includes the material that was used for the ETAPE evaluation campaign: the resources, scoring tools, results, etc., that were used or produced during the campaign. The aim of this evaluation package is to enable external players to evaluate their own systems and compare their results with those obtained during the campaign itself.
