Presentation

ASR Dictation Program Accuracy: Have Current Programs Improved?

Authors
  • Shannon McCrocklin (Southern Illinois University)
  • Abdulsamad Humaidan (Southern Illinois University)
  • Idée Edalatishams (Iowa State University)

Abstract

Automatic Speech Recognition (ASR) dictation programs have the potential to help language learners get feedback on their pronunciation by providing a written transcript of recognized speech. Early research into dictation programs showed rates of recognition for non-native speech too low to provide usable feedback (Coniam, 1999; Derwing, Munro, & Carbonaro, 2000), but updated research revisiting the accuracy of dictation transcripts for non-native speech is needed. This study investigates current accuracy rates for two programs, Windows Speech Recognition (WSR) and Google Voice Typing (Google). Participants (10 native English speakers and 20 advanced non-native speakers) read 60 sentences and responded to two open-ended questions. Transcripts were analyzed for accuracy, and t-tests were used to compare programs. Major findings include: 1) Google displayed a tendency to turn off in the middle of transcription, which affected rates of attempted words; 2) when comparing the accuracy for native versus non-native speech, both programs had higher levels of accuracy for native speech; and 3) when comparing programs for the same speaker, Google outperformed WSR for both speaker groups on both tasks. Compared to the results of Derwing et al. (2000), Google seems to offer substantial increases in accuracy for non-native speakers.

How to Cite:

McCrocklin, S., Humaidan, A., & Edalatishams, I. (2018). “ASR Dictation Program Accuracy: Have Current Programs Improved?”, Pronunciation in Second Language Learning and Teaching Proceedings 10(1).


Published on
2019-01-01
