An On-line Technique for Speaker and Environment Adaptation
Giuliani, Diego
1999-01-01
Abstract
In this work an on-line acoustic compensation technique for robust speech recognition is introduced. The proposed technique operates in the acoustic feature domain in order to reduce the acoustic mismatch between training and actual conditions. Acoustic observation vectors, delivered by the acoustic front-end, are mapped into a reference acoustic space, while the statistics for performing the acoustic mapping are collected by exploiting past input data. The technique is independent of the particular speech recognizer used. A set of experiments concerning speaker and environment adaptation was carried out. Results show that the proposed technique tangibly improves the performance of a speaker-independent speech recognizer based on hidden Markov models trained with clean speech.
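The abstract does not specify the form of the feature mapping or of the on-line statistics. As one hypothetical reading, the sketch below maps each incoming feature vector toward reference (training-condition) statistics using exponentially weighted estimates accumulated from past frames; the class name OnlineFeatureCompensator, the parameters ref_mean, ref_std, and decay, and the bias/scale form of the mapping are illustrative assumptions, not the paper's actual method.

```python
# Hypothetical sketch of on-line feature-domain compensation: past input frames
# drive running mean/variance estimates, and each frame is re-mapped toward a
# reference acoustic space before being passed to an unchanged recognizer.
import numpy as np


class OnlineFeatureCompensator:
    """Map acoustic feature vectors toward reference statistics on-line."""

    def __init__(self, ref_mean, ref_std, decay=0.995, eps=1e-6):
        self.ref_mean = np.asarray(ref_mean, dtype=float)  # training-condition mean (assumed known)
        self.ref_std = np.asarray(ref_std, dtype=float)    # training-condition std (assumed known)
        self.decay = decay                                  # forgetting factor for past-data statistics
        self.eps = eps
        self.run_mean = np.zeros_like(self.ref_mean)        # running mean of observed frames
        self.run_var = np.ones_like(self.ref_std)           # running variance of observed frames
        self.initialized = False

    def __call__(self, frame):
        x = np.asarray(frame, dtype=float)
        if not self.initialized:
            self.run_mean = x.copy()
            self.initialized = True
        else:
            # Update statistics from past input data (exponentially weighted, on-line).
            self.run_mean = self.decay * self.run_mean + (1.0 - self.decay) * x
            diff = x - self.run_mean
            self.run_var = self.decay * self.run_var + (1.0 - self.decay) * diff * diff
        # Map the observed frame into the reference acoustic space.
        run_std = np.sqrt(self.run_var) + self.eps
        return (x - self.run_mean) / run_std * self.ref_std + self.ref_mean
```

In this reading, the compensated frames are simply substituted for the original front-end output, which is consistent with the abstract's claim that the technique is independent of the particular speech recognizer used.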