What you get is what you see

Revisiting the evaluator effect in usability tests

Morten Hertzum, Rolf Molich, Niels Ebbe Jacobsen

    Research output: Contribution to journal › Journal article › Research › peer-review

    Abstract

    Usability evaluation is essential to user-centred design; yet, evaluators who analyse the same usability test sessions have been found to identify substantially different sets of usability problems. We revisit this evaluator effect by having 19 experienced usability professionals analyse video-recorded test sessions with five users. Nine participants analysed moderated sessions; ten analysed unmoderated sessions. For the moderated sessions, participants reported an average of 33% of the problems reported by all nine of these participants and 50% of the subset of problems reported as critical or serious by at least one participant. For the unmoderated sessions, the percentages were 32% and 40%. Thus, the evaluator effect was similar for moderated and unmoderated sessions, and it was substantial for the full set of problems and still present for the most severe problems. In addition, participants disagreed in their severity ratings. As much as 24% (moderated) and 30% (unmoderated) of the problems reported by multiple participants were rated as critical by one participant and minor by another. The majority of the participants perceived an evaluator effect when merging their individual findings into group evaluations. We discuss reasons for the evaluator effect and recommend ways of managing it.
    Original language: English
    Journal: Behaviour and Information Technology
    Volume: 33
    Issue number: 2
    Pages (from-to): 143-161
    ISSN: 0144-929X
    DOIs
    Publication status: Published - 2014
