Category Archives: eAssessment


Use of e-assessment nearly doubles at Open University

Year on year use of online assessment is nearly doubling here at the OU. In the last year around half a million quizzes were delivered to students in our virtual learning environment using a combination of the Moodle quiz engine and the University’s in-house OpenMark system.

Interactive Computer Marked Assessment delivery at the OU

The use of the e-assessment tools for summative purposes (affecting the final mark for a module) has risen to around 16% of all quizzes delivered. Meanwhile a new question engine for Moodle has been pioneered by Tim Hunt and Phil Butcher and is scheduled for release this December. Phil says “the new engine has a crispness and consistency that inspires confidence” and he’s pleased to “wave farewell to many of the inconsistencies of the old engine”.

Enhancements planned over the next year include:

  • Drag and drop of words onto images
  • Drag and drop of images onto images
  • New short answer question using pattern-matching algorithm
  • New question type using drop-down lists
  • New question type to enable placing of markers on an image
  • New numerical question type enabling use of mathematical and scientific notation
  • New question type to enable incorporation of Java applets (including automated marking of diagrams)
  • Audio recording question type
  • New authoring interface
  • Inclusion of STACK maths questions
  • Interface to Learnosity audio recording tool
  • Dragging implemented on touch screen devices e.g. iPad
  • Better import and export from question bank to facilitate off-line authoring
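
Question types like these usually reach Moodle courses through its XML import format, which the last item alludes to. As a flavour of what an authored question looks like in that format (the question content, wildcard patterns and feedback below are invented for illustration; the new engine and question types may well use richer formats):

```xml
<quiz>
  <!-- A short-answer question in Moodle XML import format.
       Wildcards (*) let one answer pattern match several phrasings. -->
  <question type="shortanswer">
    <name><text>Forces on a raindrop</text></name>
    <questiontext format="html">
      <text><![CDATA[<p>A raindrop falls vertically at constant speed.
        What can you say about the forces acting on it?</p>]]></text>
    </questiontext>
    <usecase>0</usecase> <!-- 0 = case-insensitive matching -->
    <answer fraction="100">
      <text>*equal*opposite*</text>
      <feedback><text>Correct: the forces are balanced.</text></feedback>
    </answer>
    <answer fraction="50">
      <text>*equal*</text>
      <feedback><text>Partially correct – what about their directions?</text></feedback>
    </answer>
  </question>
</quiz>
```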


HEFCE’s new strategy for e-learning

Category : Adoption , eAssessment , Policies

The Higher Education Funding Council for England has issued a document called Enhancing learning and teaching through the use of technology: A revised approach to HEFCE’s strategy for e-learning. Naturally I was interested to see what they are recommending. Various other studies are quoted which demonstrate the benefits of elearning, and there are references to a few learning technologies:

  • mobile learning and personalisation: learners expect to be able to use their own devices and to personalise institutional services
  • eportfolio (though they don’t call it that): more learners will require a lifelong learning record which provides links to formal qualifications, facilitates reflection and helps to identify learning opportunities
  • eassessment: some of the benefits are listed in the report, though there are no recommendations

Two of these, eportfolios and eassessment, are already priorities for JISC, which is also prioritising learning resources and activities, technology to support the administration of learning, teaching and assessment (a bit of recursion going on here) and technology-enhanced learning environments. There’s mention too of JISC’s and the Higher Education Academy’s pilots on open educational resources to examine how they can enhance learning and teaching. The Academy’s plans for developing an easily-navigable evidence base are discussed too – this would certainly be useful.

There’s also the age-old argument that the driver should be the enhancement of learning and teaching rather than the technology, and more strongly:

Innovative developments in technology will only be relevant if the enhancement of learning and teaching is the core purpose.

You can agree with this in principle, but as I’ve argued before, the technical innovations tend to come first and only then are their applications in education made possible.

All that is kind of by way of introduction to a list of strategic priorities and goals or benchmarks which institutions might try to achieve to meet those priorities. One example of a goal is:

Web 2.0 technologies are harnessed to support communities of learning and research

The danger with benchmarks like these of course is that they can turn into mere tickboxing exercises. Just about any university could tick that one but it’s meaningless without a sense of scale, impact and change over time. Nevertheless there’s a useful set of indicators in the policy which we’ll need to look at carefully to see if we can translate them into our institutional context, compare with other elearning benchmarking methodologies and add these dimensions of scale, impact and change.


Summative online assessment: disaster and triumph

Category : eAssessment

I’ve just spent two days at the e-Assessment in Practice Conference at the Defence Academy in Shrivenham. I felt a bit out of it not being in military uniform or being followed around by a dog but it’s been a good opportunity to get back into the area of eassessment.


Two of the presentations dealt with issues around large-scale summative eassessment ie big groups of students lined up in rows doing online tests under exam conditions. Bill Warburton and Helena Knowles from Southampton University were running an ambitious summative assessment pilot where students from satellite campuses in places such as Winchester were bused into Southampton to sit the exam. Bill talked about the military-style planning for the large cohorts of students, testing of the workstations in advance, training of the invigilators, with eassessment staff on hand to assist them… The preparation for the first assessments was meticulous.

Then disaster struck.

A flood meant that the data centre was knee-deep in water, which took out some of the servers and the networks. Several of the eassessments had to be abandoned. All that planning, that bringing around of the sceptics… the learning technologist’s nightmare. But Bill still seems to be smiling (perhaps through gritted teeth), still has a job, and Southampton will be continuing to take the pilots further.

At Manchester University, the push came from the University President who was so concerned about the institution’s poor results in the National Student Survey that he saw online assessment as a key way to enhance feedback to students.

Julie Andrews was tasked with coordinating much of the eassessment activity at Manchester and, like Bill at Southampton, found summative assessment an order of magnitude more difficult to organise than the formative assessments she’d been providing to students up till then. There were so many things to organise: seats allocated to students as they walked in, passwords required to view the exam paper, the Respondus LockDown Browser used so that students could only access the test, paper copies of the exam paper and answer sheets available in case of failure…

Some extraordinary statistics back up Julie’s claims that Manchester’s foray into summative eassessment has been exceptionally successful. For Julie, the increased numbers of students in recent years have meant that providing individual feedback on assessments is well nigh impossible – this way the marking and feedback are instantaneous. Most impressively though, there have been massive improvements in exam performance and hugely reduced student dropout. Other factors to do with the course have remained fairly constant so Julie attributes the improvements entirely to the use of eassessment.

And Manchester is used to rain. They had no floods to disrupt the eassessments; just once in four years has Julie had to resort to the paper-based backup procedures.


Computers now marking free text responses better than humans

Category : eAssessment , OU VLE

Sally Jordan gave a workshop today on how to use the Intelligent Assessment Technologies system we’ve got plugged into our VLE to develop short text response questions. One example she gave (I’ve reworded it slightly):

A raindrop falls vertically with constant speed. What can you tell from this about the forces acting on the raindrop?

The answer, which I could just about recall from my Higher Physics, is that they’re equal and opposite. You can enter this in all sorts of ways, with misspelling, synonyms and a variety of grammatical structures. Merely enter ‘equal’ and you’ll be given another chance with appropriate feedback saying you’ve only got it partially correct. Sally’s trials show that students are marked accurately by the computer 97% of the time.
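The Intelligent Assessment Technologies matching rules aren’t spelled out here, but the behaviour described – tolerating misspellings and synonyms, and giving partial credit with feedback when only ‘equal’ is mentioned – can be sketched. Everything below (the function names, synonym lists and fuzzy-matching threshold) is invented for illustration and is not the real system:

```python
import re
from difflib import get_close_matches

# Illustrative synonym lists for the raindrop question -- not IAT's real data.
EQUAL = ["equal", "same", "identical"]
OPPOSITE = ["opposite", "opposing", "balanced", "cancel"]

def _matches(word, vocab):
    # Accept exact words or close misspellings (e.g. "equel" for "equal").
    return bool(get_close_matches(word, vocab, n=1, cutoff=0.8))

def mark(response):
    """Return 'correct', 'partial' or 'incorrect' for a free-text answer."""
    words = re.findall(r"[a-z]+", response.lower())
    has_equal = any(_matches(w, EQUAL) for w in words)
    has_opposite = any(_matches(w, OPPOSITE) for w in words)
    if has_equal and has_opposite:
        return "correct"
    if has_equal or has_opposite:
        return "partial"   # give the student another attempt, with feedback
    return "incorrect"
```

A real short-answer matcher also has to cope with negation and word order (‘the forces are not equal’), which is where the time and expertise in authoring these questions goes.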

Sally Jordan

In a study to ascertain the effectiveness of this technology, responses to seven questions from students on our introductory science course (S103) were marked by the Intelligent Assessment system and by six tutors. There were variations among the tutors in the marking of four of these questions, and some tutors disagreed with the question author as to the correct response.

There was no surprise that the computer could mark more consistently than the tutors overall and more in line with the question author. What was surprising was the number of misunderstandings, slips and inconsistencies which occurred with the human markers.

Exclusively online assessment in higher education, and these kinds of closed free-text questions, aren’t set to replace us humble human markers any time soon. They take a lot of time and expertise to write and are only applicable for assessing certain types of learning where questions invite a relatively small range of possible answers. But they have their place as part of an overall assessment strategy, and the study concludes that Intelligent Assessment is robust enough for low-stakes summative use. The system will be deployed for all students on our largest science course shortly.


Assessing user generated content and collaboration

Category : eAssessment

I’m at an event called Assessment for Open Learning in the Cotswolds for a couple of days. Two particularly interesting presentations this morning showed how online learning technologies are being used to develop collaborative and other skills, and how these are being assessed.

Mark Endean discussed a postgraduate course on Team Engineering (T885) where students are put in teams and given an empty wiki and a synchronous collaboration tool (FlashMeeting). Teamworking and leadership skills need to be demonstrated for accreditation as chartered engineers, hence the necessity for demonstrating competence in these areas as part of the course. The students work in teams but are assessed both jointly, through the submission of a report, and individually, on their reflections on the effectiveness of the team and their own performance. There is a strong emphasis on providing evidence for any assertions they make. They do this by linking to content in the wiki and to extracts from the videoconferences (where individual contributions can be linked to and replayed with ease). Tutors can then access these when assessing whether or not students’ assessments of themselves are accurate.

Mary Kellet described a postgraduate course in education where students will be required to contribute a 1-2 minute audio or video clip to a team wiki, either by recording it or by locating it on the Internet. The best ones would be placed in a repository for use by future learners on the course. She sees various benefits of this approach. One is an attempt to reduce the imbalance in the power relationship between staff and students by giving learners a sense that they’re valued and have something to offer which might benefit teachers as well as peers and future students. Another intention is that the course becomes a more dynamic and ‘living’ entity, continually being enhanced by the contributions of the students, which are then critically evaluated by their peers.

Immediate concerns from the audience included issues of intellectual property rights when putting student contributions into a repository, and also the logistics of maintaining that repository. Both of these examples however demonstrate how we’re beginning to become more sophisticated in our uses of elearning, building learning activities which involve the use of several tools such as videoconferencing, wikis and eportfolios and integrating these into assessment processes.


Tutors give their verdict on new gradebook

Category : eAssessment , Moodle , OU VLE

Initial feedback on the new Moodle gradebook is promising. Tutors on the Open University course SDK125 “Introducing health sciences: a case study approach” reported:

Gradebook seems very useful, as I now get to view a list of all of my students’ results, without having to go into each of their StudentHome pages. It was also useful to see their cumulative results.

Overall, this is a useful tool. I imagine that I will only use it after each iCMA to check on overall scores and submissions. I liked the facility to compare my group’s average with the whole cohort.

Gradebook part 1

Gradebook part 2

We had some concerns about the speed but it seems that tutors are prepared to wait up to a minute or so for their groups’ averages to be worked out and displayed.

Hi, the gradebook loaded quickly and is useful.

Seems like the efforts of Phil Butcher and others at the OU have paid off:

This is a great addition to the website and is going to be an extremely useful tool.


The e-framework and monolithic VLEs/LMSs

Category : Architecture , eAssessment

The prevailing wisdom from techie types you meet at elearning conferences and in the blogosphere seems to be that VLEs as large applications are unsustainable and that the future is a range of components built by different companies or projects which interact with each other over the Internet (or intranet) via web services as a distributed learning environment. Martin Weller’s posts The VLE/LMS is dead and Some more VLE demise thoughts sum up some of the issues in relation to emerging social software and VLEs. But where does the e-framework, the focus of a large amount of investment from JISC and its equivalents in Australia, New Zealand and the Netherlands, fit into this discussion?

The e-framework is an attempt to tackle the interoperability issues and to build the underlying architecture of a distributed virtual learning environment. JISC and others are also funding the development of different applications within the framework. The idea is that if an institution wants, for example, to change the forum system it’s using, it can just plug in a different one – and the distributed VLE will continue to appear as a single system to the user. A great idea, but there are huge logistical issues to overcome if it’s ultimately going to work. The e-framework is also beginning to look like a monolithic model itself, where the implication is that institutions will still control the student experience. It’s arguably therefore in conflict with the “use whatever software you can find on the Internet” model.
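The ‘plug in a different one’ idea is essentially programming to an interface, applied at the level of networked services. A minimal sketch in Python (all class and method names below are invented for illustration; real e-framework components expose web-service contracts such as WSDL definitions rather than Python classes, but the substitution principle is the same):

```python
from abc import ABC, abstractmethod

class ForumService(ABC):
    """The contract any forum component must honour. In the e-framework this
    would be a web-service definition, not a Python class."""
    @abstractmethod
    def post(self, topic: str, message: str) -> None: ...
    @abstractmethod
    def read(self, topic: str) -> list[str]: ...

class InHouseForum(ForumService):
    """One implementation of the contract."""
    def __init__(self):
        self._topics: dict[str, list[str]] = {}
    def post(self, topic, message):
        self._topics.setdefault(topic, []).append(message)
    def read(self, topic):
        return self._topics.get(topic, [])

class VendorForum(ForumService):
    """A drop-in replacement with different internals; the VLE below
    never needs to change when this is swapped in."""
    def __init__(self):
        self._store: list[tuple[str, str]] = []
    def post(self, topic, message):
        self._store.append((topic, message))
    def read(self, topic):
        return [m for t, m in self._store if t == topic]

class DistributedVLE:
    def __init__(self, forum: ForumService):
        self.forum = forum  # any conforming component can be plugged in
```

The hard part in practice isn’t the pattern, it’s getting independent projects to agree on, and keep honouring, the contract – which is where the logistical issues mentioned above come in.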

Despite my ongoing healthy scepticism regarding the viability of this approach I was invited onto the AQuRate, Minibix and AsDel Advisory Group (a mouthful of a name if ever there was one) which met earlier this week in London. This group oversees three projects which are attempting to develop components of an embryonic distributed assessment system using web services within the e-framework. In a nutshell, one system allows you to create questions, one is the storage system for the questions and one presents questions to the student.

This is a radically different model from monolithic eassessment systems such as TOIA and QuestionMark Perception – and from the VLE/LMS where the assessment bit sits alongside forums and everything else in a single application. The three JISC projects are already succeeding at a proof of concept level. They appear to be producing effective technical solutions which work with the QTIv2.1 specification and have a good chance of interoperating effectively with each other through web services by the end of the projects in April 2008.
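
To give a flavour of what the three components exchange, here is a minimal item in the QTI v2.1 format they work with – the sort of thing AQuRate might author, Minibix store and AsDel deliver. The question content is invented; only the element structure follows the specification:

```xml
<assessmentItem xmlns="http://www.imsglobal.org/xsd/imsqti_v2p1"
                identifier="raindrop01" title="Forces on a raindrop"
                adaptive="false" timeDependent="false">
  <responseDeclaration identifier="RESPONSE" cardinality="single" baseType="identifier">
    <correctResponse><value>A</value></correctResponse>
  </responseDeclaration>
  <outcomeDeclaration identifier="SCORE" cardinality="single" baseType="float"/>
  <itemBody>
    <choiceInteraction responseIdentifier="RESPONSE" shuffle="true" maxChoices="1">
      <prompt>A raindrop falls vertically at constant speed.
        The forces acting on it are:</prompt>
      <simpleChoice identifier="A">balanced</simpleChoice>
      <simpleChoice identifier="B">unbalanced</simpleChoice>
    </choiceInteraction>
  </itemBody>
  <!-- Marking is delegated to a standard response-processing template -->
  <responseProcessing
    template="http://www.imsglobal.org/question/qti_v2p1/rptemplates/match_correct"/>
</assessmentItem>
```

Because the item carries its own response and marking declarations, any conformant delivery engine can present and score it – which is what makes the authoring/bank/delivery split feasible.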

Having been involved in a range of projects developing eassessment systems with varying levels of success over the years I am keen to see sustainable products – not merely neat technical solutions. The e-framework is an interesting concept and many of its building blocks are now in place but it lacks certain key features of a successful open source community. These tend to be led by charismatic individuals such as Linus Torvalds or Martin Dougiamas who have the skills and personality to harness the efforts of others to enhance the product. Such leaders tend to understand the entire application, insist on optimising the performance of the product at every opportunity, can spot new requirements and ensure they are fulfilled, and are natural leaders (Woods and Guliani, 2005).

Martin Dougiamas at the 2007 UK Moodlemoot
Dougiamas: guru

There’s no such guru to follow for the e-framework but even more fundamentally, the framework is composed of many unmaintained pieces of code written by different individuals using a range of languages and technologies during projects with temporary funding. There is no common purpose which motivates developers and users continually to enhance a system of key importance to themselves or their institutions – as there is for example with Apache or Linux.

I very much hope that what seem to be excellent tools coming out of the AQuRate, Minibix and AsDel projects will have life beyond their project funding. The only example I know of a ubiquitous and apparently sustainable open source educational application so far is Moodle (I’d be delighted to hear of others, though!). SAKAI may prove equally successful, and that would be good for Moodle – competition is an antidote to complacency. If these three JISC projects are to be sustainable they could do worse than look at longer-term integration with SAKAI and Moodle.


Making content interactive

Category : Content , eAssessment , Moodle

I’ve seen elearning projects fail many times over the years because they attempted to take static text designed for print, perhaps with a few graphics, put it on the web and expected students to engage in endless page-turning which they would have been better off doing from a book. It’s boring, the text takes longer to read on screen than on paper, academic staff don’t see the point, students loathe it and print it out anyway at their own expense, and elearning gets a bad name.

There are instances where textual content designed for print is useful online: the student going abroad with a laptop who doesn’t want to carry all her books or the visually impaired learner who listens to texts with screen reader software. I suspect though that the majority of our learners will continue to read content designed for print on paper.

But computers are better than print for certain things and one of them is interactivity. Books can ask you questions and allow you to write an answer in a box but they can’t then check your response and give you instant feedback, record how you’re coping with the learning materials or prompt a tutor to contact you if you seem to be struggling.

That’s why a development announced today is quite significant. Our open source eassessment system, OpenMark, now allows single questions to be combined with other content within a Moodle page. You can read some text, then test your understanding of it without having to go into a separate test module. Suddenly textual content comes alive and makes sense (in small doses) in an online environment.

Openmark question in Moodle content