Ditch Wireframing, Go Prototyping

I recently ran into a couple of articles arguing that traditional wireframing is no longer as useful as prototyping in the browser. They argue that designers should go straight from sketches to coding with HTML/CSS and JavaScript, which means designers have to be able to code to some extent!

Why Prototyping Beats Wireframing

Ditch Traditional Wireframes: this one gives good descriptions of the different fidelities of wireframing. If you have time, also read the comments on this article; there are many different opinions there.

Time to Dump Wireframes

Time to Dump Wireframes 2

What do you think?

Reading Reflection: Usability Testing Data Analysis and Reporting

For this week’s reading, at the beginning I was confused about the differences among informal summative evaluation (quantitative), formal quantitative analysis, and formative evaluation (qualitative). Later in the reading, I learned that informal summative evaluation uses simple statistical analysis, such as the mean and standard deviation, to check whether the UX meets the UX targets. It doesn’t include inferential statistical analysis such as ANOVA, t-tests, and F-tests (those belong to formal quantitative analysis). It only serves to check whether the UX meets the UX targets, not to find UX issues; formative qualitative evaluation is what finds the issues. Here it seems that “summative” maps to quantitative and “formative” maps to qualitative, which is very misleading. The so-called “informal summative” evaluation is not necessarily always summative; it can lead to the next iteration if the UX targets are not reached. Also, I don’t think it’s necessary to state so strictly that informal summative evaluation is only for checking whether the UX targets are reached; it could also help find UX issues. Plus, as we discussed in class, where the “UX targets” initially come from is also a big question.
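
For my own reference, here is a minimal sketch (in Python) of the kind of simple check an informal summative evaluation might involve: compute the mean and standard deviation of observed task times and compare the mean against a UX target. The task-time numbers and the 60-second target are made-up example values, not anything from the reading.

```python
# A minimal sketch of an informal summative check: compare observed task
# times against a UX target using only simple descriptive statistics
# (mean and standard deviation), with no inferential tests such as a t-test.
# The task times and the 60-second target below are made-up example values.

import statistics

task_times_seconds = [48.2, 55.1, 62.7, 50.4, 58.9, 46.3, 64.0, 52.8]  # one value per participant
ux_target_seconds = 60.0  # hypothetical UX target: mean task time should not exceed 60 s

mean_time = statistics.mean(task_times_seconds)
std_dev = statistics.stdev(task_times_seconds)  # sample standard deviation

print(f"Mean task time: {mean_time:.1f} s")
print(f"Standard deviation: {std_dev:.1f} s")

if mean_time <= ux_target_seconds:
    print("UX target met.")
else:
    print("UX target not met: another formative (qualitative) iteration is needed to find the issues.")
```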

Other things I wish to remember:

1. It can be beneficial to keep a participant around to help with data analysis. This is not always possible, but when it is, it helps a lot, since “too often the problem analyst can only try to interpret and reconstruct missing UX data. The resulting completeness and accuracy become highly dependent on the knowledge and experience of the problem analyst.” (p. 563)

2. The relationship between a critical incident and a UX problem instance: a “critical incident is an observable event (that happens over time) made up of user actions and system reactions, possibly accompanied by evaluator notes or comments, that indicates a UX problem instance.” (p. 565)

3. The informal summative evaluation report is supposed to be kept for internal use only, restricted to the project group (e.g., designers, evaluators, implementers, project manager). “Our first line of advice is to follow our principle and simply not let specific informal summative evaluation results out of the project group. Once the results are out of your hands, you lose control of what is done with them and you could be made to share the blame for their misuse.” (p. 596)

4. The Common Industry Format (CIF) can be referred to when we write our usability testing report.