Evaluators who examine the same system using the same usability evaluation method tend to report substantially different sets of problems. This so-called evaluator effect means that different evaluations point to considerably different revisions of the evaluated system. The first step in coping with the evaluator effect is to acknowledge its existence. In this study, 11 usability specialists individually inspected a website and then met in four groups to combine their findings into group outputs. Although the overlap in reported problems between any two evaluators averaged only 9%, the 11 evaluators felt that they were largely in agreement. The evaluators perceived their disparate observations as multiple sources of evidence in support of the same issues, not as disagreements. Thus, the group work increased the evaluators’ confidence in their individual inspections, rather than alerting them to the evaluator effect.
|Title||CHI 2002: Extended Abstracts|
|Publisher||Association for Computing Machinery|
|Status||Published - 2002|
|Event||CHI 2002 - Minneapolis, MN, USA|
|Period||20/04/2002 → 25/04/2002|
Bibliographic note: The publication should NOT be included in the annual report; it is registered for internal extracts at Computer Science.
Hertzum, M., Jacobsen, N. E., & Molich, R. (2002). Usability Inspections by Groups of Specialists: Perceived Agreement in Spite of Disparate Observations. In CHI 2002: Extended Abstracts (pp. 662-663). Association for Computing Machinery.