
Back to grammar: Using grammatical error correction to automatically assess L2 speaking proficiency

Bannò, Stefano; Matassoni, Marco
2024-01-01

Abstract

In an interconnected world where English has become the lingua franca of culture, entertainment, business, and academia, the growing demand for learning English as a second language (L2) has led to increasing interest in automatic approaches for assessing spoken language proficiency. In this regard, mastering grammar is one of the key elements of L2 proficiency. In this paper, we illustrate an approach to L2 proficiency assessment and feedback based on grammatical features, using only publicly available data for training and a small proprietary dataset for testing. Specifically, we implement it in a cascaded fashion: starting from learners’ utterances, we investigate disfluency detection, explore spoken grammatical error correction (GEC), and finally use grammatical features extracted with the spoken GEC module for proficiency assessment. We compare this grading system to a BERT-based grader and find that the two systems perform similarly when using manual transcriptions, but that their combination brings significant improvements to assessment performance and enhances validity and explainability. In contrast, when using automatic transcriptions, the GEC-based grader obtains better results than the BERT-based grader. The results obtained are discussed and evaluated with appropriate metrics across the proposed pipeline.

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11582/346967