Reading Reflection on “The 4th Paradigm of Science”

First of all, I’m deeply saddened that Jim Gray went missing at sea. I’d like to express my condolences to him, to those aboard the recently missing flight MH370, and to everyone else lost on planes and boats in the ocean.

According to this reading, the four paradigms of science are experimental, theoretical, simulation (computational), and data-intensive scientific discovery. In my experience and observation, however, most people today, researchers or not, still understand rigorous scientific research to mean experimental research.

In experimental research, the researchers usually start with a question and a hypothesis about it, then go out and collect data to test that hypothesis. The scale of the data collected is usually small, often fewer than a few thousand observations. Because these data are collected for the purpose of answering a specific question, and the collection process is carefully designed to reduce bias and increase generalizability and representativeness, they are usually high-quality data.

Nowadays, digital devices are everywhere, and they record volumes and volumes of data. A large part of these data were not recorded to answer any specific question. So “what do we do with these data?” and “how do we turn them into insights?” are the questions for the fourth paradigm of scientific discovery. Consequently, much of the research in this area is exploratory, and researchers in this area often get questioned by experimentalists: “so what is your hypothesis?”

It goes without saying that the datasets for data-intensive science are large. However, large doesn’t mean good. Because these data were not collected to answer your specific research questions, it takes great effort to curate them in order to make them useful. As this reading says, there are three basic steps in data-intensive science: capture, curate, and analyze/visualize. Data curation can be the most time-consuming part, which is why Jim Gray advocated funding for generic data curation and analysis tools. However, my questions (or the parts I don’t understand) are: is there really a generic way of cleaning and curating data? Isn’t any method of data cleaning essentially a source of subjective bias? Are there successful Laboratory Information Management Systems (LIMS) today, seven years after this concept was brought up?

Hey, T., Tansley, S., & Tolle, K. (Eds.). (2009). The Fourth Paradigm: Data-Intensive Scientific Discovery. Microsoft Research.


JAVA_HOME and java.library.path

I’m working on a project where I need to set the JAVA_HOME path. Typing which java in the terminal gives me “/usr/bin/java”, but this is just a symlink to the launcher, not the actual Java home. The right way is to run /usr/libexec/java_home, which prints the Java home path. Mine (OS X Snow Leopard 10.6.8) is /System/Library/Java/JavaVirtualMachines/1.6.0.jdk/Contents/Home

Then you can use JAVA_HOME=$(/usr/libexec/java_home) to set the JAVA_HOME path without hard-coding the exact path.
You can then use either echo $JAVA_HOME or env | grep JAVA_HOME to confirm it.
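Putting the two steps together, here is a minimal shell sketch; the fallback branch is my own addition for systems without the macOS java_home helper, not part of the original tip:

```shell
# On OS X, /usr/libexec/java_home prints the real JDK home,
# so we never have to hard-code the versioned directory.
if [ -x /usr/libexec/java_home ]; then
  JAVA_HOME=$(/usr/libexec/java_home)
else
  # Fallback guess for other systems: derive it from the java binary.
  JAVA_HOME=$(dirname "$(dirname "$(command -v java || echo /usr/bin/java)")")
fi
export JAVA_HOME
echo "JAVA_HOME=$JAVA_HOME"
```

The same lines also fit in ~/.bash_profile so the variable survives new terminal sessions.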

Another issue I had is with java.library.path when using Eclipse. Even though I had included all the .jar files in the Eclipse project, I still got an UnsatisfiedLinkError saying something was not found in the java.library.path. I had to add a .jnilib file to the Java library path. The safe way to do this without disturbing the global java.library.path is to create a new user library under Project -> Properties -> Java Build Path. After you create the new user library, click the arrow to the left of the library’s title in the dialog, and change “Native library location” from “None” to the folder where the .jnilib file is located. Problem solved!
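Outside Eclipse, the equivalent fix is to point java.library.path at the folder containing the .jnilib when launching the JVM, e.g. java -Djava.library.path=/path/to/natives LibPathCheck. Here is a small sketch for checking what the JVM is actually searching; the library name “mylib” is a hypothetical placeholder:

```java
// Prints the directories the JVM searches for native libraries.
// If the folder holding your .jnilib is not listed here, loading
// the library will throw UnsatisfiedLinkError.
public class LibPathCheck {
    public static void main(String[] args) {
        System.out.println(System.getProperty("java.library.path"));
        // Hypothetical: load libmylib.jnilib from one of those folders.
        // System.loadLibrary("mylib");
    }
}
```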

Ditch Wireframing, Go Prototyping

I ran into a couple of articles arguing that traditional wireframing is no longer as useful as prototyping in the browser. They argue that designers should go straight from sketches to coding with HTML/CSS and JavaScript. Designers have to be able to code to some extent!

Why Prototyping Beats Wireframing

Ditch Traditional Wireframes: this one gives good descriptions of the different fidelities of wireframing. If you have time, please also read the comments on this article; there are many different opinions there.

Time to Dump Wireframes

Time to Dump Wireframes 2

What do you think?

Navigation and Selection among Multiple Factors

I’m looking for a new pair of glasses because my eyesight is sadly going downhill again. I stare at the computer screen for too long and work too hard 🙂 😦 So I launched the site and selected the first option among “shop complete glasses”, “shop frame only”, and “shop lenses only”. Up came a page with a super long list of factors to select. Pic. 1 below is a zoomed-out view of the page, clearly showing how long the list is, with all the whitespace on the right. What’s worse, all the factors on the left are single clicks, and the user cannot choose multiple factors; the page isn’t dynamic, as shown in the second screenshot (Pic. 2). All the choices are in a plain list, and every number is listed individually, which is why the page takes up so much space. The numbers should instead be a bounded input field with up-and-down arrows. Moreover, in parallel with “shape”, “rim”, “material”, “gender”, “brand”, “price”, “color”, “eye size”, “country of origin”, etc., there is one factor labelled “category”. “Category” is too generic a word: all of the preceding factors could be categories, so the label doesn’t indicate a clear hierarchy. It should be renamed something more specific, such as “glass type”.

Pic. 1

Pic. 2

Pic. 3

This website does have an advanced search function, but it also has problems. It has two separate parts that serve the same purpose. The site explains “B Measurement” on the individual glass frame page, but not here in the search. This is very inconvenient for users, since they will most likely land on this search page first and only then reach the individual frame page. Also, after I configured the advanced search and it took me to the results, I couldn’t further filter the results using the toolbar on the left; I had to go through the long result list one by one. When I went back to Advanced Search, all my previous configurations were gone, and I had to redo them.

Pic. 4

I think good navigation features that let users find the products they like are bread-and-butter for online shopping websites. A shopping site is failing if users cannot easily locate and browse what they want. An intelligent recommender system can further help users get to products they like. Amazon and eBay do a much better job in this regard. Admittedly, they are bigger and richer companies, but every big and rich company started from somewhere.

Reading Reflection: Usability Testing Data Analysis and Reporting

At the beginning of this week’s reading, I was confused about the differences among informal summative evaluation (quantitative), formal quantitative analysis, and formative evaluation (qualitative). Later in the reading, I learned that informal summative evaluation provides simple statistical analysis, such as means and standard deviations, in order to check whether the UX reaches the UX targets. Informal summative evaluation doesn’t include inferential statistical analysis such as ANOVA, t-tests, and F-tests (those belong to formal quantitative analysis). It only serves to check whether the UX reaches the UX targets, not to find UX issues; formative qualitative evaluation is what finds the issues. Here it seems that “summative” means quantitative and “formative” means qualitative, which is very misleading. The so-called “informal summative” evaluation is not necessarily always summative; it can lead to the next iteration if the UX targets are not reached. Also, I don’t think it’s necessary to insist that informal summative evaluation only checks whether the UX targets are reached; it could also help find UX issues. Plus, as we discussed in class, where the “UX targets” initially come from is also a big question.

Other things I wish to remember:

1. It is beneficial to keep a participant around to help with data analysis. This is not always possible, but when it is, it helps a lot, since “too often the problem analyst can only try to interpret and reconstruct missing UX data. The resulting completeness and accuracy become highly dependent on the knowledge and experience of the problem analyst.” (p. 563)

2. The difference between a critical incident and a UX problem instance is that a “critical incident is an observable event (that happens over time) made up of user actions and system reactions, possibly accompanied by evaluator notes or comments, that indicates a UX problem instance.” (p. 565)

3. An informal summative evaluation report is supposed to be kept for internal use only, restricted to the project group (e.g., designers, evaluators, implementers, project manager). “Our first line of advice is to follow our principle and simply not let specific informal summative evaluation results out of the project group. Once the results are out of your hands, you lose control of what is done with them and you could be made to share the blame for their misuse.” (p. 596)

4. The Common Industry Format (CIF) can be referred to when we write our usability testing report.

RAA4: Assisting Instructional Designers on the Model Driven Architecture in Technology Enhanced Learning Systems

Drira, R., Laroussi, M., Le Pallec, X., & Warin, B. (2012). Contextualizing learning scenarios according to different Learning Management Systems. IEEE Transactions on Learning Technologies.



Technology Enhanced Learning (TEL): This paper defines TEL as a complex system formed by a set of interdependent and heterogeneous components (i.e., actors, tools, and learning objects) organized in space and time in order to satisfy a learning goal.

Learning Management Systems (LMS): This paper defines LMS as a software system that supports distance teaching and learning. “An LMS provides much relevant functionality for collaborative learning, assessment, and communication using extremely powerful tools such as forums, chats, wikis, blogs, quizzes, etc.”

This paper is set in the context of instructional design for Technology Enhanced Learning (TEL) systems. The authors state that there are many different Learning Management Systems (LMS); to achieve interoperability, organizations have developed Learning Technology Standards (LTS), but using standards has drawbacks. For example, standards are too generic, and instructional designers must stay LTS-compliant and thus cannot flexibly tailor the instructional design to the specific needs of specific learning contexts. The instructional design thereby loses pedagogical expressiveness and contextual expressiveness.

One solution is to use Model Driven Architecture (MDA) in the instructional design process to deal with the problem of system interoperability across different execution platforms. However, novice designers can have technical difficulties using this approach.

The Model Driven Architecture (MDA) approach in instructional design follows three steps:

1. A model of the intended system is defined with a specific meta-model. This meta-model allows an accurate description of specific needs.

2. A model transformation engine with specific rules is used to transform the preceding model into an LMS-specific model.

3. The specific model can be deployed on the LMS using an automatic generator/deployer.
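Step 2 above could be sketched, very roughly, as a rule-based mapping. Everything below (class names, rules, tool names) is a hypothetical illustration of the idea, not code from the paper:

```java
import java.util.HashMap;
import java.util.Map;

// Toy sketch of MDA step 2: a transformation engine maps activities
// of a platform-independent learning scenario onto the concrete
// tools of one target LMS via a rule set specific to that LMS.
public class ToyTransformer {
    // Rules for a hypothetical target LMS: abstract activity -> LMS tool.
    private static final Map<String, String> RULES = new HashMap<>();
    static {
        RULES.put("Discussion", "Forum");
        RULES.put("Assessment", "Quiz");
        RULES.put("Resource", "FilePage");
    }

    public static String transform(String abstractActivity) {
        // Keep the abstract name when no rule matches, so the
        // designer can see what still needs a manual mapping.
        return RULES.getOrDefault(abstractActivity, abstractActivity);
    }

    public static void main(String[] args) {
        System.out.println(transform("Discussion"));
    }
}
```

The paper's point is that writing such rule sets by hand is exactly the technical hurdle that trips up novice designers, which is what Gen-COM tries to hide behind a graphical interface.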

Purpose of the Research: 

The focus of this paper is on step 2 above. Novice designers usually have technical difficulties transforming the models, and the purpose of this paper is to design a tool that helps instructional designers in this process.


The authors propose a novel approach called ACoMoD (Assistance for Contextualized Modeling of Learning Systems) and developed a graphical, interactive tool called Gen-COM to support it. Gen-COM integrates some recommended instructional design best practices. The authors then conducted a user study of the tool with 44 instructional designers, asking for the users’ feedback on the usefulness of assistance for tailoring pedagogy with technical tools, the usefulness of the good-practice recommendations, and the usability of Gen-COM.

Main Findings:

1. Designers found that Gen-COM was useful in tailoring pedagogy with LMS tools. Designers who are skilled in model transformation emphasized that Gen-COM offers a powerful transformation mechanism.

2. Although the integration of best practices in the design process is useful for novices, it is less so for designers who are very familiar with their institutional context.

3. Gen-COM clearly separates the work space for matching pedagogy and technology from the best practice reminders.

4. Most designers stated that they are more likely to use a model-driven approach with tools like Gen-COM, which hide technical difficulties while letting them benefit from the approach’s many advantages, including interoperability, reuse, and personalization.


Some parts of this paper get a bit technical and difficult to read, and the authors use a lot of acronyms. The paper is not directly related to user experience, but I found that it implies a tension between the UX team and the developer team: the UX team strives to do user-centered design and wants the design tailored to the users’ specific contexts, while the developers want standards, interoperability, and reuse. Developers do not want to redesign everything for new user contexts and needs; they want to use the frameworks they developed before. This might be especially true for novice developers and for small developer teams, because designing for specific contexts takes time and expertise. When time is limited and the development team is small and full of novices, the developers will feel that their situation is not understood by the UX researchers. From there come the biggest communication issues and the imperfections in the final products. This paper tries to fill the gap by designing a tool that reduces the technical requirements on the designers.

Tangible on the Touch Interface: Apptivity Toys on the iPad

I ran into an Apptivity toys commercial online, and I think it’s a pretty cool idea. Apptivity toys are physical toys that you can use on the surface of the iPad. For one thing, they extend past the limits of hand gestures and finger sizes; for another, they extend the touch surface into the physical world. It’s a combination of a touch user interface and a tangible user interface, so children who grow up with tablet touch surfaces will also have direct opportunities to play with physical toys in the virtual world. I can see that it would be even cooler to play with these toys on a touch screen larger than the iPad’s. I understand that they made the toys for the iPad because adoption is easier when so many people already own one. This is one step further toward integrating the physical world and the virtual world.