Reading Reflection: Usability Testing Data Analysis and Reporting

For this week’s reading, I was initially confused about the distinctions among informal summative evaluation (quantitative), formal quantitative analysis, and formative evaluation (qualitative). Further into the reading, I learned that informal summative evaluation uses simple statistical analysis, such as means and standard deviations, to check whether the UX reaches the UX targets. It does not include inferential statistical analysis such as ANOVA, t-tests, and F-tests (those belong to formal quantitative analysis). It only serves to check whether the UX reaches the UX targets, not to find UX issues; formative qualitative evaluation is what finds the issues. Here it seems that “summative” means quantitative and “formative” means qualitative, which is very misleading. The so-called “informal summative” evaluation is not necessarily always summative; it can lead to the next iteration if the UX targets are not reached. Also, I don’t think it is necessary to state so categorically that informal summative evaluation only checks whether the UX targets are reached; it could also help find UX issues. Plus, as we discussed in class, where the “UX targets” initially come from is also a big question.
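To make the distinction concrete for myself, here is a minimal sketch of the kind of simple descriptive check an informal summative evaluation involves: comparing observed measures against a benchmark, with no inferential statistics. The task times and the target value below are hypothetical numbers of my own, not from the book.

```python
from statistics import mean, stdev

# Hypothetical task-time observations (seconds) from an evaluation session,
# and a hypothetical UX target for the mean task time.
task_times_sec = [42, 55, 38, 61, 47, 50]
target_mean_sec = 50

observed_mean = mean(task_times_sec)
observed_sd = stdev(task_times_sec)

print(f"mean = {observed_mean:.1f} s, sd = {observed_sd:.1f} s")
# A simple target check: no t-test, ANOVA, or other inferential analysis.
print("UX target met" if observed_mean <= target_mean_sec else "UX target not met")
```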

Other things I wish to remember:

1. It will be beneficial to keep a participant around to help with data analysis. This is not always possible, but when it is, it helps a great deal, since “too often the problem analyst can only try to interpret and reconstruct missing UX data. The resulting completeness and accuracy become highly dependent on the knowledge and experience of the problem analyst.” (p.563)

2. The difference between a critical incident and a UX problem instance: “Critical incident is an observable event (that happens over time) made up of user actions and system reactions, possibly accompanied by evaluator notes or comments, that indicates a UX problem instance.” (p.565)

3. The informal summative evaluation report is meant for internal use only, restricted to the project group (e.g., designers, evaluators, implementers, the project manager). “Our first line of advice is to follow our principle and simply not let specific informal summative evaluation results out of the project group. Once the results are out of your hands, you lose control of what is done with them and you could be made to share the blame for their misuse.” (p.596)

4. The Common Industry Format (CIF) can be referred to when we write our usability testing report.


Reading Reflection: Usability Testing

We read two chapters on usability testing this week. The UX book refers to usability testing as rigorous evaluation, and to activities such as expert inspection and heuristic evaluation as rapid evaluation. Is the one called rigorous actually more rigorous or effective? This makes me think about Apple again. Sometimes the experts’ experience, combined with their understanding of the users’ goals and with proper marketing strategies, can carry a product much further over its life. Understandably, the word “rigorous” here refers to the overall scientific procedure of the method, yet throughout the text the authors actually emphasize the differences between UX evaluation and formal scientific research (especially quantitative research). For example: (1) Quantitative UX data collection and analysis “is emphasized less than it used to be in previous usability engineering books because of less focus in practice on quantitative user performance measures and more emphasis on evaluation to reveal UX problems to be fixed.” (p.504) (2) When talking about participant recruiting, the term “sample” is not appropriate, “because what we are doing has nothing to do with the implied statistical relationships and constraints.” (p.511) (3) The authors spend a lot of text discussing the proper number of participants, saying it ultimately depends on the specific case and on the UX researchers’ experience and intuition. In a word, UX evaluation is a very practical activity focused on fixing real problems.

Other things I wish to remember for preparing the UX evaluation: 

1. It is very important to set the goals and define the scope of the evaluation. “Do not let short-term savings undercut your larger investment in evaluation and in the whole process lifecycle.” (p.504) The choice of evaluation methods and techniques should be goal-driven.

2. We should try to engage as many people on the team as possible in at least some evaluation. Evaluation is not only the UX team’s job; other members of the project should be involved as much as possible. This begets ownership and is necessary for the UX evaluation results to be taken seriously and for the problems to actually get fixed.

3. When preparing the tasks for the evaluation, we need to allow time for the users to do free exploration, and we can also ask the users to define tasks.

Things I wish to remember for running the evaluation session: 

1. There are roughly four types of data: quantitative UX data (observed data and subjective data, obtained mainly using questionnaires), qualitative UX data (mainly critical incident collection), emotional impact data, and phenomenological data (how the product integrates into users’ lifestyles, collected mainly using variations of diary-based methods).

2. The UX book mentions the HUMAINE project on emotional impact research. I went ahead and checked the report and would like to share it here: the HUMAINE project website, and the HUMAINE report on a taxonomy of affective system usability testing.

 

Reading Reflection: Information Organization and Prototype

Information Architecture Reading from Brinck, Gergle, & Wood:

1. What is information architecture?

Information architecture refers to the structure or organization of a website, especially how the pages relate to one another; it involves content analysis and planning. Information architecture is rooted in database design and information retrieval, and is strongly influenced by HCI, library science, and the psychology of how human beings navigate and organize information.

2. What are some user-centered techniques for creating the information architecture?

Information architecture can be created based on the strict logic of the content, but this is rarely the case. Most of the time there is no single logical organization of the content, and even if there is one, it may not be the best for the users, because users are not always completely logical, nor are they always clear about their specific goals when navigating the website. This reminds me of the MEMEX machine by Vannevar Bush that we discussed at the beginning of this semester, which reminds us that human beings organize information by association rather than by hierarchy. User-centered information architecture design takes human perception and cognition into consideration. There are roughly top-down and bottom-up approaches to creating an information architecture. When designers encounter uncertainty in information architecture design, a useful method is to have the users do card sorting activities.
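One common way to analyze card sorting results is to count how often participants group each pair of cards together and then run hierarchical clustering on that co-occurrence matrix. The cards and sorts below are made up, and this particular analysis is my own illustration of the idea, not something the chapter prescribes.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# Hypothetical cards and three participants' sorts (groups of card indices).
cards = ["Admissions", "Tuition", "Courses", "Faculty", "Research"]
sorts = [
    [[0, 1], [2, 3], [4]],
    [[0, 1], [2], [3, 4]],
    [[0], [1, 2], [3, 4]],
]

# Count how often each pair of cards ends up in the same group.
n = len(cards)
co = np.zeros((n, n))
for sort in sorts:
    for group in sort:
        for i in group:
            for j in group:
                co[i, j] += 1

# Turn co-occurrence into a distance matrix and cluster hierarchically.
dist = 1.0 - co / len(sorts)
np.fill_diagonal(dist, 0.0)  # make sure the diagonal is exactly zero
tree = linkage(squareform(dist), method="average")

# Cut the tree into two candidate categories for the information architecture.
print(dict(zip(cards, fcluster(tree, t=2, criterion="maxclust"))))
```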

3. What are various website topologies (ways of organizing information)?

The topology is the primary way that pages are linked together. The most popular one is the hierarchy, or tree. Other website topologies include linear, matrix (or grid), full mesh, arbitrary network, and hybrid topologies. It is important to consider the breadth and depth of the hierarchy: typically, the breadth of the tree should be less than seven branches (though the authors of this book do not think this number is well justified), and the depth of the tree should not exceed three levels. Semantics is a task-based and more user-friendly method of organizing information.
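Here is a minimal sketch of how those breadth and depth guidelines could be checked automatically on a site map represented as nested dicts. The site structure and the helper function below are my own hypothetical example, not from the reading.

```python
# Hypothetical site map: each key is a page, each value maps to its child pages.
site = {
    "Home": {
        "Admissions": {"Apply": {}, "Tuition": {}},
        "Academics": {"Courses": {}, "Faculty": {}, "Advising": {}},
        "Research": {},
    }
}

def check(tree, depth=0, max_breadth=7, max_depth=3):
    """Recursively collect pages that break the breadth/depth guidelines."""
    violations = []
    for page, children in tree.items():
        if len(children) > max_breadth:
            violations.append(f"{page}: {len(children)} branches (breadth over {max_breadth})")
        if depth > max_depth:
            violations.append(f"{page}: level {depth} (depth over {max_depth})")
        violations += check(children, depth + 1, max_breadth, max_depth)
    return violations

print(check(site) or "within the breadth/depth guidelines")
```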

The Two Research Articles on Information Organization:

What navigation topologies and structures work better in what kinds of situations?

How users navigate information depends on how clear they are about what they are looking for. When the task is more specific, users tend to use the traditional search interface; when the task is more general, a tag cloud is preferred. A usage-oriented hierarchy is preferred over a subject-oriented hierarchy in all circumstances. To sum up, how users navigate information depends on their tasks, so the information architecture should be designed to match users’ tasks at all levels.

UX Book Chapter 11 On Prototyping:

1. How do you choose between the different types and fidelity levels of prototypes?

The choice of prototype fidelity depends on the stage of the overall project and on the design perspective being prototyped. The UX book mentions horizontal and vertical prototyping. I am still not very clear about the difference between features and functionality: what is a feature, what is functionality, and why are features horizontal while functionality is vertical?

2. What are some pitfalls to avoid when prototyping?

Prototypes need to match the design perspective of the product. Viewers of the prototypes need to be chosen carefully, and low-fidelity prototypes need to be carefully explained to viewers from outside the design team. Get organizational understanding that prototyping can reduce risk on both the software engineering and UX sides. Be honest about the limitations of the prototype and do not overwork it.

Reflection on iPhone App Design Session

We had a design session to sketch the interface of an iPhone app where students can view their GPA and grades. The following are the flow charts and the sketches from our team.

Major take-aways from this session are:

1. It takes a great deal of familiarity with the platform’s interface elements, viewed from the designer’s perspective, to be able to design a good interface. Two of us are iPhone users and the other two are Android users, so our sketches contain a mixture of iPhone and Android elements, even though the task we were given was to design an iPhone app. Even the two iPhone users on our team typically just use the iPhone rather than think about it from a designer’s perspective.

2. When we designed the app, our team did not feel the need to let users view the GPA of each semester, because the hypothetical persona we produced was a college junior looking for an internship. We didn’t think he would need to look at one specific semester’s GPA; we thought he would need to give recruiters his overall GPA or his core-courses GPA, so we designed a function for viewing the core-courses GPA but not one for viewing the GPA of a specific semester. However, when we took the small tour around the classroom to look at other teams’ designs and saw that other groups had included this function, we started to drift away from our hypothetical persona’s needs and to think that maybe we should modify our design to let users view the GPA of a specific semester. This is not a technically difficult change at all, but is there really a need, or were we just influenced by the other teams’ designs? We didn’t really know enough about our users to make this decision.

In summary, we need a great deal of understanding of both the phone platform and the users to produce a good design. The persona needs to be carefully produced and grounded in data, because it points to the context and the needs of the users; when the context changes, the needs change.

Reading Reflection: Persona, Scenario, Goal-Directed Design

Major take-aways from this week’s reading on personas, scenarios, and requirements in goal-directed design:

(1) There are three different types of user goals: experience goals, end goals, and life goals. I wish we had read these details at the beginning of the semester when we started to talk about goal-directed design. Until now, I mostly thought of goals as life goals and found them difficult to translate into specific designs. Others may have had different understandings of goals, so it is important to reach a consensus on which types of goals are being discussed. I am most interested in life goals, which relate to reflective processing and design for reflection. I think this is what Slow Technology is aiming at.

(2) User goals are user motivations. I was not quite sure about the relationship between goals and motivations. Now I realize that user motivations are captured in the form of user goals.

(3) User goals are to be inferred from qualitative data, not taken directly from what users state, because users may not be able to articulate their goals, or they may not be prepared to talk about them honestly. This is why, when we asked the interviewees about their life goals in last week’s class, most of them felt it was a really vague, broad, and difficult question.

(4) Besides the three types of user goals (experience, end, and life goals), there are customer goals, business and organizational goals, and technical goals. Good design puts user goals first, but the other goals also need to be considered.

(5) There are differences between scenarios and use cases: scenarios are written from the users’ perspective, while use cases mostly originate from software engineering and systems engineering processes.

(6) In the early stages of design, pretend the interface has magic power! This is a useful reminder to be creative. I usually think a lot about constraints in many things I do, not only in designing interfaces, and this habit of thinking about constraints in turn constrains my creativity. Only when we can imagine the most magical way to meet the users’ goals can we then figure out how to overcome the other constraints as best we can.

Reflection on User Interview Questions

Our group came up with a very tentative list of questions for conducting a pilot user interview in class today. Following Cooper, we set out to understand the users’ goals rather than get into too many particulars of the website in the limited time. We started by brainstorming our interaction with the interviewees, which is why you see questions like “why?” and “how are you?” in the list.

Because I have some experience with, and have read some texts on, conducting one-on-one interviews, I had a lot of critiques of these questions. First, the sequence of the questions is not right; there are no smooth transitions between them. One group member thinks this keeps the interviewees refreshed, surprised, and motivated to expect interesting questions, but I feel it’s better to make the questions flow and to make the entire process feel like a natural, deep conversation rather than an interesting and surprising Q&A. Questions about demographics and background should come first, and questions about life goals a little later, because those are deeper-level questions. Conceptually, goals always come first, but that doesn’t mean we should ask about goals first in practice, because it is a very broad and vague question to start with. Second, these questions are meant to guide the interview; the interviewers should not read each one verbatim, and further questions should follow up on the interviewees’ responses. Most of these questions are written in formal language, for example, “which organization are you a member of?”. That reads like a questionnaire item, not an interview question; maybe we should ask “Do you belong to any organizations? What are they?” We should let the interview flow as a natural conversation rather than follow the questions exactly. Third, a question like “if you could take the form of any animal, what would it be?” may help us understand the users’ characters to some extent, but it drifts away from our main purpose given the limited time. Finally, I realize I’m getting too critical again. Thinking positively, our initial intention was great: we wanted to understand the users’ goals and their motivations for pursuing graduate studies, rather than putting them in the role of telling us what the website should be. We did get some insights into the general characteristics of the users and the things they expect and like, but our questions were a bit too broad and vague to yield much detailed insight.

Appendix: Brainstorming User Interview Questions

  1. Why?
  2. How are you?
  3. What are your life goals?
  4. How do you expect graduate studies to help you?
  5. Where are you from?
  6. What is your academic history?
  7. What is your work experience?
  8. What is your favorite color?
  9. What are your favorite websites?
  10. What organizations are you a member of?
  11. How do you use the current CGT grad school website? (If you do.)
  12. What problems do you have with the current CGT grad school website?
  13. If you could take the form of any animal, what would it be?
  14. What do you do in a typical day?

Reading Reflection: Qualitative User Research

What are the goals of user research?

The first goal is to understand users’ goals and motivations. According to Cooper, goals always come first and tasks second, whereas the UX book puts more emphasis on the particulars of the work practice. I feel that every time I do the reading, Cooper and the UX book start with seemingly contradictory philosophies but somehow converge as I proceed; they just take different approaches.

The second goal is to understand the users’ work context, workflow, and work practices, in order to better design the product to fit their needs.

What are some available tools (research methods) for conducting user research?

Ethnographic interviewing (user observation and one-on-one interviews), contextual analysis, card sorting, task analysis (which can be part of ethnographic interviewing), marketing strategies, etc.

What are some big mistakes (pitfalls) to avoid when conducting user research?

Avoid leading questions. (This is a good reminder for me; I feel I might do this if I couldn’t think of any good questions and the user seemed not to have many expectations beyond what he or she already has.)

Avoid making the users designers. (We should not expect them to be designers, or visionaries, at the outset, let alone “make” them into designers.)

Avoid discussing technology.

Avoid being lost in detailed questions without knowing the bigger goals (Again, from Cooper).

Distinguish users from customers. (I wasn’t quite aware of this before. Now I see the difference: customers are the ones who buy the product, and users are the ones who use it; the one who buys it is not necessarily the one who uses it.)

A Common Vision

An overall take-away is that terms like “usability”, “user experience”, and “user-centered design” are misleading in a way, since the whole process of design is not only about studying the users. One important thing is that the different departments of the organization that commissions the product have to share a common vision first, and they should see much further than the users can.