Language and Speech Laboratory

An audio-visual corpus for speech perception and automatic speech recognition

Martin Cooke, Jon Barker, Stuart Cunningham, Xu Shao.

Journal of the Acoustical Society of America, 120, 2421-2424.

An audio-visual corpus has been collected to support the use of common material in speech perception and automatic speech recognition studies. The corpus consists of high-quality audio and video recordings of 1000 sentences spoken by each of 34 talkers. Sentences are simple, syntactically identical phrases such as "place green at B 4 now." Intelligibility tests using the audio signals suggest that the material is easily identifiable in quiet and low levels of stationary noise. The annotated corpus is available on the web for research use.
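Since every sentence in the corpus follows the same fixed six-word template illustrated by "place green at B 4 now", the sentence inventory can be sketched as a simple generator. The word sets below are illustrative assumptions extrapolated from the single quoted example, not an authoritative listing of the corpus vocabulary:

```python
import random

# Assumed word sets for the six-slot template
# command - color - preposition - letter - digit - adverb.
# Only "place", "green", "at", "B", "4", "now" appear in the
# abstract; the rest are hypothetical fillers for illustration.
COMMANDS = ["bin", "lay", "place", "set"]
COLORS = ["blue", "green", "red", "white"]
PREPOSITIONS = ["at", "by", "in", "with"]
LETTERS = list("ABCDEFGH")
DIGITS = [str(d) for d in range(10)]
ADVERBS = ["again", "now", "please", "soon"]

def generate_sentence(rng=random):
    """Return one sentence following the fixed syntactic template."""
    return " ".join([
        rng.choice(COMMANDS),
        rng.choice(COLORS),
        rng.choice(PREPOSITIONS),
        rng.choice(LETTERS),
        rng.choice(DIGITS),
        rng.choice(ADVERBS),
    ])

print(generate_sentence())
```

Because each slot draws independently from a small closed set, the task vocabulary stays tiny while still yielding many distinct sentences, which is what makes the material tractable for both listening tests and automatic recognition.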