Updates on Term Project

I found the College Student Experience Questionnaire and an article about a campus-wide focus on student experience.

The following figure is from the article, and it provides a framework for thinking about students’ experience as a whole. I may adopt this model for my data analysis, though data cannot be found online for every category in this model.

Figure from: Keeling, R. P. (2004). Learning Reconsidered: A Campus-Wide Focus on the Student Experience. National Association of Student Personnel Administrators.


Reflection on the Dark Side Presentation

First of all, I think we were very brave to try out one class remotely. It was a very important experience for a class studying the Internet. I don’t know whether previous students in this class have tried remote classes, but I hope Dr. V lets future students continue to try, even though it may not be a pleasant experience. I also hope future students can find better software than Adobe Connect for this. No revolution or progress will happen if we always shy away from unpleasant possibilities. The key is not to avoid doing the wrong things, but to know what went wrong and to make it right.

Second, regarding what went wrong, I think the key failure point is that each of us had too much freedom for ourselves, but not enough freedom to participate in the class through Adobe Connect. We could not see others unless they were the current presenter, and other people could not see us unless we were presenting. Other people could not hear us either unless we pressed the “talk” button. We lost supervision, and self-discipline doesn’t work all the time. Basically, we could do anything else: eat, do homework for other subjects, check the Facebook feed, etc. We had too much freedom, and I ate too many Milk Duds while listening to others’ talks. Adobe Connect may be perfect for people who have to attend boring conferences all the time, but it is definitely not for engaging students in class. Although all of us had talk privileges, we had to press the “talk” button at the risk of interfering with the presenter, overloading the network, and introducing lots of echo. So many of us would rather type in the chat box than speak. The social affordances of Adobe Connect do not support free discussion as well as Google+ Hangouts does.

Finally, Adobe Connect is designed for formal conferences, and Google+ Hangouts is designed for informal, friendly hangouts. Using both for Tech621 made me realize how much they differ depending on their specific purposes. Any such software in the future, if the purpose is a remote class, should treat engaging students (making them want to participate, and making participation feel easy) as the highest priority.

#RAA5: Impact of Web Personalization on User Information Processing and Decision Outcomes

1. APA Citation:

Tam, K. Y., & Ho, S. Y. (2006). Understanding the impact of web personalization on user information processing and decision outcomes. MIS Quarterly, 30(4), 865–890. PDF
2. Purpose of the Research: To understand the impact of personalized content on user information processing and decision making, because little is known about the effectiveness of web personalization and the link between the IT artifact (the personalization agent) and the effects it exerts on a user’s information processing and decision making.
3. Methods: The authors theoretically develop and empirically test a model of web personalization. The model is grounded in social cognition and consumer research theories. It depicts the different stages of web processing as (1) attention, (2) cognitive processing, (3) decision, and (4) evaluation of decision, and it highlights two sets of variables hypothesized to have an impact on these four stages: variables related to (1) web personalization and (2) goal specificity. The variables related to web personalization are self-reference and content relevance (a toy sketch of these two variables appears at the end of this post). Hypotheses were generated from the research model and empirically tested in a laboratory experiment and a field study.
The hypotheses are (also refer to the figure below):
Hypotheses related to Self Reference in Web Personalization:
H1: Users attend to self-referent web content to a larger extent than they attend to non-self-referent web content.
H2a: Users recall self-referent web content faster and more accurately than they recall non-self-referent web content.
H3a: Users exposed to self-referent web content will seek less information and spend less time on decision making than when they are exposed to non-self-referent web content.
H4a: Users accept offers associated with self-referent web content to a larger extent than they accept offers associated with non-self-referent web content.
Hypotheses related to Content Relevance in Web Personalization:
H2b: Users recall web content relevant to their processing goal faster and more accurately than they recall irrelevant web content.
H4b: Users accept offers associated with relevant web content to a larger extent than they accept offers associated with irrelevant web content.
Hypotheses related to Processing Goal Specificity:
H2c: There is a larger difference in recall accuracy and response time between relevant and irrelevant web content for users with more-specific processing goals than for those with less-specific processing goals.
Hypotheses related to Evaluation:
H5a: Users evaluate self-referent web content more highly than they evaluate non-self-referent web content.
H5b: Users evaluate relevant web content more highly than they evaluate irrelevant web content.
The controlled lab experiment focuses on the three variables hypothesized to attract users’ attention, affect their level of cognitive processing, and bias their decisions. The field study was based on a music download site and lasted six weeks. The authors examined users’ behaviors by analyzing their web activities. Content on the music site was driven by a commercial personalization agent, and all activities on the website were logged for the entire six-week period.
4. Main Findings: The findings from the lab experiment and field study indicate that content relevance, self-reference, and goal specificity affect the attention, cognitive processes, and decisions of web users in various ways. Users were also found to be receptive to personalized content and to find it useful as a decision aid. Major findings are summarized in the table below. Only H2a is not supported (while content relevance leads to better recall of the content, the effect is not obvious for self-reference); all other hypotheses are supported with statistical significance.
5. Analysis: This article provides good information on web personalization and how it impacts users’ decision outcomes. It also provides a snapshot of other related research in this line. Most research on web personalization comes from business management, e-commerce, marketing, and so on. Whether the aim is to understand consumers, to better design personalization agents, or something else, the ultimate goal is to maximize business opportunities, sell products, and gain profit. Such work does not usually consider other side effects of personalization, such as the echo chamber effect. I am not sure whether the echo chamber effect affects companies’ business negatively or positively. It may hinder small companies trying to get new products rolling, because it is difficult for new things to get into consumers’ bubbles. However, big companies that already have consumers gathered around their products may love for this to happen. As far as I know, the after-effects on consumers and the larger impact of personalization on society are usually not considered in this line of research. Indeed, these issues may be a little out of scope for e-commerce research, and they may usually be addressed in other fields.
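
To make the two web-personalization variables concrete, here is a toy sketch. It is entirely my own illustration, not material from the paper, and all names, tags, and items in it are hypothetical: self-reference as content that explicitly addresses the user, and content relevance as overlap with the user’s processing goal.

```python
# A toy sketch (my illustration, not the paper's materials) of the two
# web-personalization variables: self-reference and content relevance.
# All names, tags, and items here are hypothetical.

def make_self_referent(content: str, user_name: str) -> str:
    """Self-referent version of the content: it explicitly addresses the user."""
    return f"{user_name}, we picked this for you: {content}"

def relevance(content_tags: set, goal_tags: set) -> float:
    """Content relevance as simple overlap between the content's tags and the
    user's processing goal (0 = irrelevant, 1 = fully relevant)."""
    if not goal_tags:
        return 0.0  # a less-specific goal gives no basis for matching
    return len(content_tags & goal_tags) / len(goal_tags)

# Example: a user whose processing goal is finding jazz piano music.
goal = {"jazz", "piano"}
item = {"title": "a new piano trio album", "tags": {"jazz", "piano", "bebop"}}

print(make_self_referent(item["title"], "Alice"))  # self-referent content
print(relevance(item["tags"], goal))               # 1.0 -> highly relevant
```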

Personalization: The Machine Mirrors the Ugly Us That We Don’t Want to See, so We Blame the Machine

There is a fair amount of discussion that Google search, Amazon recommendations, and Facebook streams are too personalized and are making us closed-minded, the so-called filter bubble or echo chamber effect. I’ve been thinking about this for a while, and I have a new idea about what makes personalization and recommendation bad.

It is not the machine, and not the algorithms. It is human nature. The machine is ultimately learning from human beings, and the machine is just augmenting human reality. My argument is that for a person with a relatively open and balanced mind, the personalization results will be fairly balanced, and the recommendations will serve knowledge discovery well. Only for people who themselves hold heavily biased opinions and worldviews about certain things will the personalization results be biased. I don’t think the machine is making things worse; the machine is just reflecting reality, and it is good that the machine makes us see this reality so we can figure out a way to solve it. What produces these biased and closed opinions is essentially human nature, and it is not the machine’s responsibility to solve this problem.
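
To illustrate the argument, here is a minimal sketch, my own toy example rather than any real system, of a content-based recommender that ranks items by overlap with what the user already consumed. The same algorithm gives a balanced reader balanced results and a one-sided reader a bubble; all items and topics are made up.

```python
# A minimal sketch (my illustration, not any real recommender) of why
# personalized recommendations mirror the user: items are ranked by how
# often their topics appear in the user's own history. All data is made up.
from collections import Counter

ITEMS = {
    "article_a": {"politics-left"},
    "article_b": {"politics-right"},
    "article_c": {"science"},
    "article_d": {"politics-left", "science"},
}

def recommend(history_topics, k=2):
    """Score each item by how many of its topics the user has already read."""
    weights = Counter(history_topics)
    scored = {
        item: sum(weights[t] for t in topics)
        for item, topics in ITEMS.items()
    }
    return sorted(scored, key=scored.get, reverse=True)[:k]

# A reader with an open, balanced history gets balanced results...
print(recommend(["politics-left", "politics-right", "science"]))
# ...while a one-sided history produces a one-sided bubble. The algorithm
# is identical; only the human input differs.
print(recommend(["politics-left", "politics-left", "politics-left"]))
```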

Closed-minded people have always existed, whether or not Google search exists. Even without Google search and similar tools, these people would not seek out or listen to opinions outside of their chamber whatsoever. We dream that the machine can actually solve this problem by providing opposing and diverse opinions (efforts like Findory news), but it is not that easy; it is difficult to move people out of their comfort zone, and so Findory failed.

To solve this problem, we have to open our own minds first, or at least some of us do. Then we can figure out a way to open other people’s minds more effectively, and make the machine do it.

To sum up, what I am arguing is that, at this moment, machine personalization and recommendation are not doing bad things (not doing especially good things either; they just reflect human reality). What we have to realize is that the problem is not the machine; it is ourselves. The machine is just letting us see the flaws that we sometimes don’t want to face. In the future, we need to figure out ways to make the machine do good on this front.

RAA#4: Social Discovery

1. APA Citation

Shneiderman, B. (2011). Social discovery in an information abundant world: Designing to create capacity and seek solutions. Information Services and Use, 31(1), 3–13. PDF

2. Definition:

Social Discovery: the collaborative processes that promote creating capabilities and seeking solutions. “In its most ambitious form social discovery is the detection of new and important relationships, patterns or principles that advance disciplines and make valuable contributions to society.”

3. Purpose

To propose a framework for the design of search tools that support social discovery.

4. Methods

The framework builds on previous theories of information seeking, work in CSCW, and the Reader-to-Leader framework of social participation.

Figure 1. The Reader-to-Leader framework suggests that the typical path for social media participation moves from reading online content to making contributions, initially small edits, but growing into more substantive contributions. The user-generated content can be edits to a wiki, comments in a discussion group, or ratings of movies, photos, music, animations, or videos. Collaborators work together over periods of weeks or months to make more substantial contributions, and leaders act to set policies, deal with problems, and mentor new users.

5. Main Findings

(1) The shift in search tools: specific fact-finding → open-ended exploratory search → social discovery (collaboration in creating capabilities and seeking solutions).

(2) The Social Discovery framework. “The social discovery concept extends the ideas from the creativity and discovery support tools based on information visualization, team coordination and design tools.” It emphasizes not only information seeking, but also participation and creativity. “Valuable contributions also come from those who tag, taxonomize, comment, annotate, rank, rate, review and summarize.”

Figure 2. The Social Discovery framework describes the two stages of work: creating capacity and seeking solutions. These are carried out by a dialog between those who initiate requests and those who provide responses over a period of weeks and months.

6. Analysis: This is a theory paper from a computer scientist. Ben Shneiderman is a big figure in HCI and information visualization. He advocates a revolution of science toward Science 2.0, which seriously considers real social problems by utilizing the web. The paper offers a framework, or guideline, for designing computational tools that better support humans and their social interactions in the process of searching for knowledge. It is meant to “promote thinking about and conducting research into the mechanisms that facilitate social discovery”. It mentions that “the implications are profound for academic, industrial and government researchers, since they force re-consideration of reward structures, especially for creating capabilities, which deserve more recognition in tenure or promotion reviews.” I am excited to see a big figure in computer science really embrace the idea of social media doing good for humanity, and I greatly admire his thoughts and insight.

RAA#3: CommentSpace, Collaborative Visual Analytics

  1. APA Citation:
    Willett, W., Heer, J., Hellerstein, J., & Agrawala, M. (2011). CommentSpace: Structured support for collaborative visual analysis. In Proceedings of the 2011 Annual Conference on Human Factors in Computing Systems (pp. 3131–3140). PDF

    CommentSpace website

  2. Purpose: (1) To present details of CommentSpace, a web-based collaborative visual analysis system that aims to help users make better sense of visualizations by synthesizing others’ comments. CommentSpace “enables analysts to annotate visualizations and apply two additional kinds of structure: 1) tags that consist of descriptive text attached to comments or views; and 2) links that denote relationships between two comments or between a comment and a specific visualization state or view.” The resulting structure can help analysts navigate, organize, and synthesize the comments, and move beyond exploration to more complex analytical tasks (a rough sketch of this structure appears after this list). (2) To evaluate this system: “how a small, fixed vocabulary of tags (question, hypothesis, to-do) and links (evidence-for, evidence-against) can help analysts collect and organize new evidence, identify important findings made by others, and synthesize their findings” and “establish common ground”.
  3. Methods: (1) present technical details of the design of this system, and usage scenario (2) evaluate by two controlled user studies and a live deployment comparing CommentSpace with a similar system that doesn’t support tags and links.
  4. Main findings: (1) A small, fixed vocabulary of tags and links helps analysts classify evidence more consistently and accurately and establish common ground. (2) Managing and incentivizing participation is important for analysts to progress from exploratory analysis to deeper analytical tasks. (3) Tags and links can help teams complete evidence gathering and synthesis tasks, and organizing comments using tags and links improves analysis results.
  5. Analysis: (1) This paper is from the “garden” of information visualization and visual analytics. This line of work (collaborative visual analytics) draws from and expands into CSCW and social media research. Because computing systems ultimately serve people within their social contexts, and because of the popularity of the web, many technical systems are implemented on the web and thus seek to support people, their communication, and their collaboration. I see this emerging point of convergence between social media and visualization techniques, but there are still huge discrepancies in the ways of thinking and working among researchers in different disciplines (especially computer science and social science). Traditionally, user studies in the technical world often lack rigor or depth; “it was almost a joke in some technical domains that reviewers of papers just need to check the mental box of the existence of user studies without considering the quality”. A large part of those papers is dedicated to “fancy algorithms”. The future of social computing calls for close collaboration between computer scientists and social scientists, and furthermore with engineers, artists, and designers. (2) This paper is related to my project of integrating user participation in rating, tagging, and commenting on academic papers. CommentSpace is designed as modular software that can run in conjunction with any interactive visualization system or website that treats each view of the data as a discrete state, so I look forward to adopting it, or some elements of it, for my project in the future.
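
As promised above, here is a rough sketch of the comment structure as I read it from the paper: comments carrying the small fixed vocabulary of tags, plus typed links between comments or between a comment and a visualization view. This is my own illustration of the idea, not CommentSpace’s actual code, and the example comments are hypothetical.

```python
# A rough sketch (my reading of the paper, not CommentSpace's actual code) of
# the structure it describes: comments with a small fixed tag vocabulary, and
# typed links between comments or between a comment and a visualization view.
from dataclasses import dataclass, field

TAGS = {"question", "hypothesis", "to-do"}          # fixed tag vocabulary
LINK_TYPES = {"evidence-for", "evidence-against"}   # fixed link vocabulary

@dataclass
class Comment:
    author: str
    text: str
    view_state: str              # id of the visualization state being discussed
    tags: set = field(default_factory=set)

@dataclass
class Link:
    link_type: str               # one of LINK_TYPES
    source: Comment              # the comment providing evidence
    target: Comment              # the comment (e.g., a hypothesis) it addresses

# Example: one analyst poses a hypothesis; another links evidence to it.
h = Comment("alice", "Unemployment drives the 2009 spike.", "view-42", {"hypothesis"})
e = Comment("bob", "The spike persists when filtering by state.", "view-43")
link = Link("evidence-for", source=e, target=h)
assert h.tags <= TAGS and link.link_type in LINK_TYPES
```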