Reading Reflection: Usability Testing Data Analysis and Reporting

For this week’s reading, I was initially confused about the distinctions among informal summative evaluation (quantitative), formal quantitative analysis, and formative evaluation (qualitative). Further into the reading, I learned that informal summative evaluation uses simple descriptive statistics such as the mean and standard deviation to check whether the UX reaches the UX targets. It does not include inferential statistical analysis such as ANOVA, t-tests, and F-tests (those belong to formal quantitative analysis). It serves only to check whether the UX reaches the UX targets, not to find UX issues; formative qualitative evaluation is what finds the issues. Here it seems that “summative” means quantitative and “formative” means qualitative, which is very misleading. The so-called “informal summative” evaluation is not necessarily always summative: it can lead to the next iteration if the UX targets are not reached. Also, I think it is unnecessary to state so definitively that informal summative evaluation only checks whether the UX targets are reached; it could also help find UX issues. Plus, as we discussed in class, where the “UX targets” come from in the first place is also a big question.
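To make the “simple statistical analysis” concrete, here is a minimal sketch in Python of the kind of descriptive check an informal summative evaluation might perform. The task times and the UX target value are invented for illustration, not taken from the book:

```python
from statistics import mean, stdev

# Hypothetical time-on-task data (seconds) for one benchmark task,
# one value per participant -- numbers invented for illustration.
task_times = [48.2, 55.0, 61.3, 44.7, 58.9, 52.4]

# Hypothetical UX target: mean time-on-task of at most 60 seconds.
UX_TARGET_SECONDS = 60.0

m = mean(task_times)
sd = stdev(task_times)
print(f"mean = {m:.1f}s, standard deviation = {sd:.1f}s")

# Just a descriptive comparison -- no ANOVA, t-test, or F-test,
# which would belong to formal quantitative analysis.
if m <= UX_TARGET_SECONDS:
    print("UX target met.")
else:
    print("UX target not met -- feed this back into the next design iteration.")
```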

Other things I wish to remember:

1. It is beneficial to keep a participant around to help with data analysis. This is not always possible, but when it is, it helps a lot, since “too often the problem analyst can only try to interpret and reconstruct missing UX data. The resulting completeness and accuracy become highly dependent on the knowledge and experience of the problem analyst.” (p. 563)

2. The relationship between a critical incident and a UX problem instance: “Critical incident is an observable event (that happens over time) made up of user actions and system reactions, possibly accompanied by evaluator notes or comments, that indicates a UX problem instance.” (p. 565)

3. The informal summative evaluation report is supposed to be kept for internal use only, restricted to the project group (e.g., designers, evaluators, implementers, project manager). “Our first line of advice is to follow our principle and simply not let specific informal summative evaluation results out of the project group. Once the results are out of your hands, you lose control of what is done with them and you could be made to share the blame for their misuse.” (p. 596)

4. The Common Industry Format (CIF) can be referred to when we write our usability testing report.

Reading Reflection: Usability Testing

We read two chapters on usability testing this week. The UX Book refers to usability testing as rigorous evaluation, while activities such as expert inspection and heuristic evaluation are called rapid evaluation. Is the one called rigorous actually more rigorous or effective? This makes me think about Apple again. Sometimes experts’ experience, combined with their understanding of users’ goals and proper marketing strategies, can carry a product further through its life. Understandably, the word “rigorous” here refers to the overall scientific procedures of the methods, yet throughout the text the authors actually emphasize the differences between UX evaluation and formal scientific research (especially quantitative research). For example: (1) Quantitative UX data collection and analysis “is emphasized less than it used to be in previous usability engineering books because of less focus in practice on quantitative user performance measures and more emphasis on evaluation to reveal UX problems to be fixed.” (p. 504) (2) When talking about participant recruiting, the term “sample” is not appropriate, “because what we are doing has nothing to do with the implied statistical relationships and constraints.” (p. 511) (3) The authors spend a lot of text discussing the proper number of participants, concluding that it ultimately depends on the specific case and on the UX researchers’ experience and intuition. In a word, UX evaluation is a very practical activity focused on fixing real problems.

Other things I wish to remember for preparing the UX evaluation: 

1. It is very important to set the goal and define the scope of the evaluation. “Do not let short-term savings undercut your larger investment in evaluation and in the whole process lifecycle.” (p. 504) The choice of evaluation methods and techniques should be goal-driven.

2. We should try to engage as many people on the whole team as possible in at least some evaluation. Evaluation is not only the UX team’s job; other members of the project should be involved as much as possible. This begets ownership, which is necessary for the UX evaluation results to be taken seriously and for the problems to actually get solved.

3. When preparing the tasks for the evaluation, we need to allow time for users to explore freely, and we can also ask users to define their own tasks.

Things I wish to remember for running the evaluation session: 

1. There are roughly four types of data: quantitative UX data (observed data plus subjective data obtained mainly using questionnaires), qualitative UX data (mainly critical incident collection), emotional impact data, and phenomenological data (how the product integrates into users’ lifestyles, collected mainly using variations of diary-based methods). See the sketch after this list.

2. The UX Book mentions the HUMAINE project on emotional impact research. I went ahead and checked the report and would like to share the links here: The HUMAINE project website. The HUMAINE report on a taxonomy of affective system usability testing.
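As a memory aid for the four data types in item 1, here is a tiny Python sketch; the structure and wording are my own paraphrase of the reading, not from the book:

```python
# Four rough types of UX evaluation data and how each is mainly collected,
# paraphrased from the reading; the phrasing is mine.
UX_DATA_TYPES = {
    "quantitative UX data": "observed measures plus subjective questionnaire data",
    "qualitative UX data": "mainly critical incident collection",
    "emotional impact data": "(collection methods not detailed in this note)",
    "phenomenological data": "variations of diary-based methods",
}

for data_type, how_collected in UX_DATA_TYPES.items():
    print(f"{data_type}: {how_collected}")
```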