Speech Acoustic Modelling using Raw Source and Filter Components

Erfan Loweimi, Zoran Cvetkovic, Peter Bell, Steve Renals

Research output: Chapter in Book/Report/Conference proceeding › Conference paper › peer-review


The use of semi-supervised training (SST) has become an increasingly popular way of improving the performance of ASR acoustic models without the need for further transcribed speech data. However, the performance of the technique can be very sensitive to the quality of the initial ASR system. This paper undertakes a comprehensive study of the improvements gained with respect to variation in the initial systems, the quantity of untranscribed data used, and the learning schedules. We postulate that SST can be effective even when the initial model is poor because it enables utterance-level information to be propagated to the frame level, and hence hypothesise that the quality of the language model plays a much larger role than the quality of the acoustic model. In experiments on Tagalog data from the IARPA MATERIAL programme, we find that this is indeed the case, and show that with an appropriately chosen recipe it is possible to achieve over 50% relative WER reductions from SST, even when the WER of the initial system is more than 80%.
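The SST recipe the abstract describes can be illustrated with a minimal self-training loop: seed a model on the transcribed data, decode the untranscribed data to obtain pseudo-labels, and retrain on the union. The sketch below is a toy illustration of that loop using a nearest-centroid classifier in place of a real acoustic model; all function names and the `rounds` parameter are illustrative assumptions, not the paper's actual recipe.

```python
import numpy as np

def train_centroids(X, y, n_classes):
    # Fit a toy "acoustic model": one mean vector (centroid) per class.
    return np.stack([X[y == c].mean(axis=0) for c in range(n_classes)])

def predict(centroids, X):
    # "Decode" each frame by assigning it to the nearest class centroid.
    dists = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
    return dists.argmin(axis=1)

def semi_supervised_train(X_lab, y_lab, X_unlab, n_classes, rounds=3):
    # Seed the model from the transcribed (labelled) data only.
    model = train_centroids(X_lab, y_lab, n_classes)
    for _ in range(rounds):
        # Decode the untranscribed data to produce pseudo-labels ...
        pseudo = predict(model, X_unlab)
        # ... then retrain on real labels plus pseudo-labels combined.
        X_all = np.concatenate([X_lab, X_unlab])
        y_all = np.concatenate([y_lab, pseudo])
        model = train_centroids(X_all, y_all, n_classes)
    return model
```

Even with very few true labels, the retrained model can outperform the seed model, which mirrors the paper's observation that SST helps even when the initial system is weak.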
Original language: English
Title of host publication: Proceedings of Interspeech 2021
Number of pages: 5
Publication status: Published - 30 Aug 2021