Computer professionals need robust, easy-to-use usability evaluation methods (UEMs) to help them systematically improve the usability of computer artefacts. However, cognitive walkthrough, heuristic evaluation, and thinking-aloud studies – three of the most widely used UEMs – suffer from a substantial evaluator effect: multiple evaluators evaluating the same interface with the same UEM detect markedly different sets of problems. A review of eleven studies of these three UEMs reveals that the evaluator effect exists for both novice and experienced evaluators, for both cosmetic and severe problems, for both problem detection and severity assessment, and for evaluations of both simple and complex systems. The average agreement between any two evaluators who have evaluated the same system using the same UEM ranges from 5% to 65%, and none of the three UEMs is consistently better than the others. While evaluator effects of this magnitude may not be surprising for a UEM as informal as heuristic evaluation, it is certainly notable that a substantial evaluator effect persists for evaluators who apply the strict procedure of cognitive walkthrough or observe users thinking out loud. Hence, it is highly questionable to treat a thinking-aloud study with one evaluator as an authoritative statement about what problems an interface contains. Generally, the application of the UEMs is characterised by (1) vague goal analyses leading to variability in the task scenarios, (2) vague evaluation procedures leading to anchoring, and/or (3) vague problem criteria leading to anything being accepted as a usability problem. The simplest way of coping with the evaluator effect, which cannot be completely eliminated, is to involve multiple evaluators in usability evaluations.
|Journal||International Journal of Human-Computer Interaction|
|Status||Published - 2003|