Post-test inquiry is perhaps the most important element of human factors validation testing. We want to learn everything we can about the failures and difficulties observed during the study, so we follow up on each instance with specific questions about the error in order to determine which aspects of the user interface led to that failure or problem.
Sometimes we see very few errors, or none at all, on certain tasks in these validation studies. Does this mean there are no remaining problems with the user interface and that the system is safe to use? What, if anything, do we do in this instance? We have learned, often the hard way, that participants may have valuable opinions and experience regarding the user interface's potential for confusion and use error, even when we never observed them making an error in the validation test scenarios.
To uncover these potential “hidden” problems, we ask participants to reflect on any aspect of the design that might lead them, or a colleague, down an undesirable or even catastrophic path. This data is just as important in making your case for overall use safety as the individual use error investigations, and if it is omitted from your HF/UE report to FDA, the agency may ask you for it.
It is more efficient to ask these open-ended questions about the user interface design during formative evaluations and testing, but sometimes the simulated use environment brings these potential problems to light in ways that a more informal formative test would not.
What is your experience with these situations? We invite your thoughts on this topic!