In this paper, we investigate multi-stream acoustic modelling using the raw real and imaginary parts of the Fourier transform of speech signals. Using the raw magnitude spectrum, or features derived from it, as a proxy for the real and imaginary parts leads to irreversible information loss and suboptimal information fusion. We discuss and quantify the importance of this information in terms of speech quality and intelligibility. In the proposed framework, the real and imaginary parts are treated as two streams of information, pre-processed via separate convolutional networks, combined at an optimal level of abstraction, and then post-processed via recurrent and fully-connected layers. We analyse the optimal level of information fusion in various architectures; the training dynamics in terms of cross-entropy loss, frame classification accuracy and WER; and the shape and properties of the filters learned in the first convolutional layer of single- and multi-stream models. We investigate the effectiveness of the proposed systems on a range of tasks: TIMIT/NTIMIT (phone recognition), Aurora-4 (noise robustness), WSJ (read speech), AMI (meetings) and TORGO (dysarthric speech). Across all tasks we achieve competitive performance: on Aurora-4, down to 4.6% WER on average; on WSJ, down to 4.6% and 6.2% WER on Eval-92 and Eval-93; on the AMI-IHM Dev/Eval sets, down to 23.3%/23.8% WER; and on AMI-SDM, down to 43.7%/47.6% WER. On TORGO, we achieve down to 31.7% and 10.2% WER for dysarthric and typical speech, respectively.
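The information-loss argument above can be illustrated with a minimal NumPy sketch (not from the paper; the 400-sample frame length and random test signal are assumptions for illustration): the real and imaginary parts of the spectrum together allow exact reconstruction of the frame, whereas the magnitude spectrum alone discards phase and is not invertible.

```python
import numpy as np

# One speech-like frame (assumed: 400 samples, e.g. 25 ms at 16 kHz).
rng = np.random.default_rng(0)
frame = rng.standard_normal(400)

spec = np.fft.rfft(frame)
real, imag = spec.real, spec.imag   # the two proposed input streams
magnitude = np.abs(spec)            # the conventional proxy feature

# Real + imaginary parts: lossless, the frame is exactly recoverable.
recon_full = np.fft.irfft(real + 1j * imag, n=len(frame))
assert np.allclose(recon_full, frame)

# Magnitude only (phase set to zero): large reconstruction error,
# even though the spectral energy is preserved (Parseval's theorem).
recon_mag = np.fft.irfft(magnitude, n=len(frame))
rel_err = np.linalg.norm(recon_mag - frame) / np.linalg.norm(frame)
assert rel_err > 0.5
```

This is the sense in which magnitude-derived features cause "irreversible information loss": no post-processing can recover the phase that the magnitude operation removed, which motivates feeding both raw streams to the network.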
IEEE/ACM Transactions on Audio, Speech, and Language Processing. Accepted/In press, 27 Dec 2022.