by Dr. Tharon W. Howard, Director, Clemson University Usability Testing Facility
Reprinted from Usability Interface, Vol 6, No. 3, January 2000
There are many pitfalls to avoid when you make the transition to a user-centered approach to information product development and technical communication. One of the most common I have observed is that Professional Communication graduate students and industry clients struggle to differentiate between “usability testing” and “validation testing.”
When they first begin testing the usability of their products, most companies and graduate students aren’t prepared to examine and fundamentally alter their information product development process. Instead, they use the same development process they’ve always used and merely add what they call a “usability test” at the end of it. Indeed, for most people just starting out in this area, “usability testing” is synonymous with the traditional talk-aloud protocol analysis—i.e., test participants are assigned a set of tasks and asked to “think out loud” as they perform those tasks in a naturalistic environment. Newcomers often don’t realize that traditional protocol analysis is only one of at least 11 different methods typically deployed by usability testers at different stages of the product development process.
These methods range from focus groups, ethnographic studies, context analyses, and comparative analyses used at the beginning of the product development process all the way to active intervention, co-discovery, and traditional protocol analyses used near the end of the process. Of course, it’s easy to understand why this confusion between the larger field of usability testing and the narrower business of validation testing happens. Since validation testing is a critical part of the larger usability testing process, it gets a lot of attention in our field. Also, setting design and product goals and then validating whether or not those goals were achieved is something that everyone from the VP for Marketing to the software engineer and technical communicator understands.
As a result, it’s relatively easy to convince management to approve the resources needed to add a “usability test” to the end of the process. However, the problems begin when the results from the testing come back. You’re almost certainly going to get data that finds flaws in the product and calls for an unplanned revision. And when you try to share that information with the project manager and the other members of the product development team, all too often they will view you as a “faultfinding whistleblower.” Because your so-called “usability” study will come too late in the process for them to take advantage of it, they will blame you for making their work look bad, and they may even sabotage future attempts at enhancing “usability” in your company. An even more serious consequence of this confusion between usability testing and validation testing is that it encourages management to view “usability testing” as too costly and too time-consuming to implement.
Again, because it comes at the end of the process and requires unexpected, unbudgeted revisions, it will appear to management that “usability testing” merely delays a product’s release and increases development costs. To avoid these kinds of problems, it’s essential that all the members of a project team differentiate between validation testing and usability testing. Validation testing is a critical part of the larger usability testing process and needs to be performed, but it also needs to be one of the last in a series of iterative usability tests that have been integrated into the entire product development cycle.