
Language Proficiency Ratings: Human vs. Machine

Authors
  • David O. Johnson (Northern Arizona University)
  • Okim Kang (Northern Arizona University)
  • Romy Ghanem (Northern Arizona University)

Abstract

This paper describes a computer model that automatically assesses the oral proficiency of audio recordings of unconstrained non-native English speech. The model uses machine learning and eleven suprasegmental measures, divided into four categories (stress, pitch, pause, and temporal), to compute proficiency levels. In an experiment with 120 non-native English speakers' monologs from the speaking section of the Cambridge ESOL General English Examinations, the Pearson correlation between the certified Cambridge English Language Assessment proficiency scores and the model's computed scores was 0.718. This human-computer correlation exceeds that of other related computer programs (0.55-0.66) and approaches the inter-rater reliability of human examiners (0.70-0.77).
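As a rough illustration (not the authors' actual system), the pipeline the abstract describes could be sketched in Python as follows. The feature names, the choice of regressor, and the data are all hypothetical stand-ins; only the overall shape (eleven suprasegmental measures in four categories, a learned model, and Pearson's r against human ratings) comes from the abstract.

```python
"""Hedged sketch, not the authors' implementation: predict proficiency
scores from suprasegmental measures with a machine-learning model and
compare the predictions to human ratings via Pearson's correlation."""
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from scipy.stats import pearsonr

rng = np.random.default_rng(0)

# Eleven hypothetical suprasegmental measures, grouped into the four
# categories named in the abstract (stress, pitch, pause, temporal).
FEATURES = [
    "stress_prominence", "stress_pace",                        # stress
    "pitch_range", "pitch_mean", "tone_choice",                # pitch
    "silent_pause_rate", "filled_pause_rate", "pause_length",  # pause
    "speech_rate", "articulation_rate", "mean_run_length",     # temporal
]

# Synthetic stand-ins for 120 speaker monologs and their human scores.
X = rng.normal(size=(120, len(FEATURES)))
human_scores = X @ rng.normal(size=len(FEATURES)) \
    + rng.normal(scale=0.5, size=120)

# Train on part of the data, predict proficiency for the rest.
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X[:90], human_scores[:90])
predicted = model.predict(X[90:])

# Human-computer agreement (the paper reports r = 0.718 on real data).
r, _ = pearsonr(human_scores[90:], predicted)
print(f"Pearson's r between human and machine scores: {r:.3f}")
```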

How to Cite:

Johnson, D. O., Kang, O., & Ghanem, R. (2015). “Language Proficiency Ratings: Human vs. Machine”, Pronunciation in Second Language Learning and Teaching Proceedings 7(1).

Published on
2015-12-31
