Computers now marking free-text responses better than humans
Sally Jordan gave a workshop today on how to use the Intelligent Assessment Technologies system we’ve got plugged into our VLE to develop short text response questions. One example she gave (I’ve reworded it slightly):
A raindrop falls vertically with constant speed. What can you tell from this about the forces acting on the raindrop?
The answer, which I could just about recall from my Higher Physics, is that they're equal and opposite: constant speed means zero net force, so the air resistance on the drop must balance its weight. You can enter this in all sorts of ways, with misspellings, synonyms and a variety of grammatical structures. Enter merely 'equal' and you'll be given another chance, with feedback saying you've only got it partially correct. Sally's trials show that the computer marks students accurately 97% of the time.
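To make the behaviour concrete, here is a minimal sketch of how a short-text marker might accept spelling variants and synonyms and give partial-credit feedback. This is a hypothetical toy, not the Intelligent Assessment Technologies engine (which uses far more sophisticated linguistic processing); the word lists and feedback strings are my own invention.

```python
import re

# Hypothetical variant lists for the two concepts in the expected answer.
# Includes common misspellings ("eqaul", "oposite") and rough synonyms.
EQUAL = re.compile(r"\b(equal|eqaul|balanced|balance|cancel(s|led)?)\b", re.I)
OPPOSITE = re.compile(r"\b(opposite|oposite|opposing|opposed)\b", re.I)

def mark(response: str):
    """Return (correct, feedback) for a student's free-text response."""
    has_equal = bool(EQUAL.search(response))
    has_opposite = bool(OPPOSITE.search(response))
    if has_equal and has_opposite:
        return True, "Correct: the forces are equal and opposite."
    if has_equal or has_opposite:
        # Partial credit: one concept present, so prompt for the other.
        return False, "Partially correct - what else can you say about the forces?"
    return False, "Think about what constant speed implies about the net force."
```

So `mark("the forces are eqaul and oposite")` is accepted despite the misspellings, while `mark("they are equal")` triggers the second-chance feedback, mirroring the behaviour described above.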
In a study to ascertain the effectiveness of this technology, responses from students on our introductory science course (S103) to seven questions were marked both by the Intelligent Assessment system and by six tutors. On four of these questions the tutors' marks varied, and some tutors disagreed with the question author about what the correct response was.
It was no surprise that the computer marked more consistently than the tutors overall, and more in line with the question author. What was surprising was the number of misunderstandings, slips and inconsistencies among the human markers.
Neither exclusively online assessment in higher education nor these kinds of closed free-text questions are set to replace us humble human markers any time soon. Such questions take a lot of time and expertise to write, and they only suit assessing certain types of learning, where a question invites a relatively small range of possible answers. But they have their place as part of an overall assessment strategy, and the study concludes that Intelligent Assessment is robust enough for low-stakes summative use. The system will shortly be deployed for all students on our largest science course.