Usability Inspections by Groups of Specialists: Perceived Agreement in Spite of Disparate Observations

Morten Hertzum, Niels E. Jacobsen, Rolf Molich

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research

Abstract

Evaluators who examine the same system using the same usability evaluation method tend to report substantially different sets of problems. This so-called evaluator effect means that different evaluations point to considerably different revisions of the evaluated system. The first step in coping with the evaluator effect is to acknowledge its existence. In this study 11 usability specialists individually inspected a website and then met in four groups to combine their findings into group outputs. Although the overlap in reported problems between any two evaluators averaged only 9%, the 11 evaluators felt that they were largely in agreement. The evaluators perceived their disparate observations as multiple sources of evidence in support of the same issues, not as disagreements. Thus, the group work increased the evaluators’ confidence in their individual inspections, rather than alerted them to the evaluator effect.
Original language: English
Title of host publication: CHI 2002: Extended Abstracts
Publisher: Association for Computing Machinery
Publication date: 2002
Pages: 662-663
ISBN (Print): 1-58113-454-1
Publication status: Published - 2002
Externally published: Yes
Event: CHI 2002 - Minneapolis, MN, United States
Duration: 20 Apr 2002 - 25 Apr 2002

Conference

Conference: CHI 2002
Country: United States
City: Minneapolis, MN
Period: 20/04/2002 - 25/04/2002
