Presentation

Analyzing Item Bias to Validate and Revise an ITA Performance Test

Authors
  • Dale T. Griffee (Texas Tech University)
  • Jeremy Gevara (Texas Tech University)

Abstract

Classroom teachers sometimes have an aversion to testing because they see tests as a device to fail students rather than to teach them. However, when teachers are involved in a program with high-stakes results, the test needs to be as fair as possible. Estimating item bias is one way to evaluate a test and make it a more equitable decision-making instrument. Using the SOAC program evaluation model, this paper reports a test instrument validation study. The purpose of this study was to determine item bias on the International Teaching Assistant (ITA) Performance Test version 8.3, a test designed to evaluate speech fluency and pronunciation in simulated teaching situations (Gorsuch, Meyers, Pickering, & Griffee, 2010). Using Multivariate Analysis of Variance (MANOVA), we examined scores on the ten test criteria for passing and failing groups. Results showed no statistically significant difference for criterion four (ITA uses grammatical structures, word choice, and transitional phrases effectively to provide cohesion to the content) or criterion nine (ITA candidate uses visuals or multimedia effectively). The other eight criteria, however, operated effectively, showing statistically significant differences between the two groups.
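The group comparison described above can be illustrated with a minimal one-way MANOVA computed by hand. The sketch below uses synthetic scores (not the study's data) for two hypothetical groups on three criteria, forms the within-group (W) and between-group (B) sum-of-squares-and-cross-products matrices, and computes Wilks' Lambda; with exactly two groups, Lambda converts exactly to an F statistic. All group sizes, means, and criterion counts here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)
# Hypothetical rating data: 30 passing and 25 failing candidates,
# each scored on 3 criteria (illustrative, not the study's data)
pass_grp = rng.normal(loc=[4.0, 3.8, 4.1], scale=0.5, size=(30, 3))
fail_grp = rng.normal(loc=[3.0, 3.6, 3.2], scale=0.5, size=(25, 3))

def wilks_lambda(groups):
    """One-way MANOVA statistic: Lambda = det(W) / det(W + B)."""
    all_data = np.vstack(groups)
    grand_mean = all_data.mean(axis=0)
    # Within-group SSCP: deviations of observations from their group mean
    W = sum((g - g.mean(axis=0)).T @ (g - g.mean(axis=0)) for g in groups)
    # Between-group SSCP: group means around the grand mean, weighted by n
    B = sum(len(g) * np.outer(g.mean(axis=0) - grand_mean,
                              g.mean(axis=0) - grand_mean) for g in groups)
    return np.linalg.det(W) / np.linalg.det(W + B)

lam = wilks_lambda([pass_grp, fail_grp])
n, p = 55, 3  # total observations, number of dependent variables
# With two groups, Wilks' Lambda maps exactly to F with (p, n - p - 1) df
F = (1 - lam) / lam * (n - p - 1) / p
print(f"Wilks' Lambda = {lam:.3f}, F({p}, {n - p - 1}) = {F:.2f}")
```

A small Lambda (far below 1) and a large F indicate that the groups' mean score vectors differ; follow-up univariate tests on each criterion would then identify which items separate the groups, as the study does criterion by criterion.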

How to Cite:

Griffee, D. T., & Gevara, J. (2011). "Analyzing Item Bias to Validate and Revise an ITA Performance Test". Pronunciation in Second Language Learning and Teaching Proceedings, 3(1).

Published on
2011-12-31
