We read two chapters on usability testing this week. The UX book refers to usability testing as rigorous evaluation, while activities such as expert inspection and heuristic evaluation are called rapid evaluation. Is the one called rigorous actually more rigorous or effective? This makes me think about Apple again. Sometimes experts’ experience, combined with their understanding of users’ goals and with proper marketing strategies, can go a long way in a product’s life.

Understandably, the word “rigorous” here refers to the overall scientific procedure of the method; yet throughout the text, the authors actually emphasize the difference between UX evaluation and formal scientific research (especially quantitative research). For example: (1) Quantitative UX data collection and analysis “is emphasized less than it used to be in previous usability engineering books because of less focus in practice on quantitative user performance measures and more emphasis on evaluation to reveal UX problems to be fixed.” (p.504) (2) When talking about participant recruiting, the term “sample” is not appropriate, “because what we are doing has nothing to do with the implied statistical relationships and constraints.” (p.511) (3) The authors spend a lot of text discussing the proper number of participants, concluding that it ultimately depends on the specific case and on the UX researchers’ experience and intuition. In a word, UX evaluation is a very practical activity focused on fixing real problems.
Other things I wish to remember for preparing the UX evaluation:
1. It is very important to set the goal and define the scope of the evaluation. “Do not let short-term savings undercut your larger investment in evaluation and in the whole process lifecycle.” (p.504) The choice of evaluation methods and techniques should be goal-driven.
2. We should try to engage as many people on the team as possible in at least some evaluation. Evaluation is not only the UX team’s job; other members of the project should be involved as much as possible. This begets ownership, and it is necessary for the UX evaluation results to be taken seriously and for the problems to actually get fixed.
3. When preparing the tasks for the evaluation, we need to allow time for users to explore freely, and we can also ask users to define their own tasks.
Things I wish to remember for running the evaluation session:
1. There are roughly four types of data: quantitative UX data (observed data and subjective data, obtained mainly via questionnaires), qualitative UX data (mainly critical-incident collection), emotional impact data, and phenomenological data (how the product integrates into users’ lifestyles, collected mainly using variations of diary-based methods).
2. The UX book mentions the HUMAINE project on emotional impact research. I went ahead and checked the report and would like to share the links here: the HUMAINE project website, and the HUMAINE report on a taxonomy of affective system usability testing.