

For those of you who like a quick answer: not necessarily. The reasons ETS delivers the TOEIC Speaking test by computer, as well as the cost-intensive method it has chosen for scoring the test, go beyond a simple business decision and are both fascinating and commonly misunderstood.
For a long time, there was no TOEIC Speaking test. While other agencies offered speaking tests conducted by an interviewer, ETS was concerned not only about the time- and labor-intensive nature of such testing for large groups, but also about interviewer subjectivity. No matter how well trained interviewers are (and for many tests the training is minimal), they can still be influenced by unconscious biases related to a test taker's accent, appearance, or other factors unrelated to actual speaking skills. Or they may simply have a bad (or good) day and become unusually harsh (or lenient) with the interviewees they encounter.
Computer technology has now advanced to the point that testing agencies can be confident using it to administer tests. I've been asked about distortion, and the answer is no: today's audio, recording, and transmission technologies allow voices to come across with great clarity (hence the boom in Internet phone services). And with computer delivery of a test, you are left with a recording to refer to in case of test taker challenges or other concerns.
In the case of scoring, however, computer technology has not yet been perfected. ETS research has shown that the technology works reasonably well for some types of test questions (simply reading a passage aloud or responding to closed-ended questions).
However, computer scoring is not yet at a satisfactory level when it comes to the types of open-ended questions so often encountered in real-life situations. If test takers have been coached to use key words, for instance, they can respond to a question without actually understanding it or coming up with an appropriate answer, and still score much higher than they deserve.
Therefore, while TOEIC Speaking tests are administered by computer, they are scored by human raters. I am not aware of any other business English test that is handled this way. Other such tests, if administered by computer, are also scored by computer, a business decision that saves significant costs but, at least in the eyes of ETS, a very research-driven agency, compromises quality.
Aren't human scorers going to be biased just as human interviewers are? They could be, but ETS goes to extraordinary lengths to eliminate that concern.
Obviously this method is much more time- and labor-intensive than running test taker responses through a computer, which spits out a score in seconds. It is comparatively costly, too, but it is more accurate and objective. ETS has determined that the quality gained is worth more than the cost savings.
—Lia Nigro, TOEIC USA Team