1. APA Citation:
Social Discovery: the collaborative processes that promote creating capabilities and seeking solutions. “In its most ambitious form social discovery is the detection of new and important relationships, patterns or principles that advance disciplines and make valuable contributions to society.”
Proposes a framework for the design of search tools that support social discovery.
It is based on previous theories of information seeking, work in CSCW, and the Reader-to-Leader framework of social participation.
Figure 1. The Reader-to-Leader framework suggests that the typical path for social media participation moves from reading online content to making contributions, initially small edits but growing into more substantive contributions. The user-generated content can be edits to a wiki, comments in a discussion group, or ratings of movies, photos, music, animations, or videos. Collaborators work together over periods of weeks or months to make more substantial contributions, and leaders act to set policies, deal with problems, and mentor new users.
5. Main Findings
(1) The shift of search tools: specific fact-finding → open-ended exploratory search → social discovery (collaboration in creating capabilities and seeking solutions)
(2) The Social Discovery framework. “The social discovery concept extends the ideas from the creativity and discovery support tools based on information visualization, team coordination and design tools.” It emphasizes not only information seeking but also participation and creativity. “Valuable contributions also come from those who tag, taxonomize, comment, annotate, rank, rate, review and summarize.”
Figure 2. The Social Discovery framework describes the two stages of work: creating capacity and seeking solutions. These are carried out by a dialog between those who initiate requests and those who provide responses over a period of weeks and months.
6. Analysis: This is a theory paper from a computer scientist. Ben Shneiderman is a major figure in HCI and information visualization. He advocates a shift from traditional science to Science 2.0, which tackles real social problems by utilizing the web. The paper offers a framework, or guideline, for the design of computational tools that better support humans and their social interactions in the process of searching for knowledge. It is set to “promote thinking about and conducting research into the mechanisms that facilitate social discovery”. It mentions that “the implications are profound for academic, industrial and government researchers, since they force re-consideration of reward structures, especially for creating capabilities, which deserve more recognition in tenure or promotion reviews.” I am excited to see a major figure in computer science really embrace the idea of using social media to do good for humanity, and I greatly admire his thoughts and insight.
- APA Citation:
Willett, W., Heer, J., Hellerstein, J., & Agrawala, M. (2011). CommentSpace: Structured support for collaborative visual analysis. In Proceedings of the 2011 Annual Conference on Human Factors in Computing Systems (pp. 3131–3140).
- Purpose: (1) Present details of CommentSpace, a web-based collaborative visual analysis system that aims to help users make better sense of visualizations by synthesizing others’ comments. CommentSpace “enables analysts to annotate visualizations and apply two additional kinds of structure: 1) tags that consist of descriptive text attached to comments or views; and 2) links that denote relationships between two comments or between a comment and a specific visualization state or view. The resulting structure can help analysts navigate, organize, and synthesize the comments, and move beyond exploration to more complex analytical tasks.” (2) Evaluate this system: “how a small, fixed vocabulary of tags (question, hypothesis, to-do) and links (evidence-for, evidence-against) can help analysts collect and organize new evidence, identify important findings made by others, and synthesize their findings” and “establish common ground”.
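The tag-and-link structure described above can be sketched as a small data model. This is only an illustration of the fixed vocabulary the paper describes; the class and field names are hypothetical and not CommentSpace's actual implementation.

```python
# Sketch of comments carrying a fixed tag vocabulary plus typed links
# to other comments or visualization views (names are hypothetical).
from dataclasses import dataclass, field

TAGS = {"question", "hypothesis", "to-do"}
LINK_TYPES = {"evidence-for", "evidence-against"}

@dataclass
class Comment:
    author: str
    text: str
    tags: set = field(default_factory=set)          # subset of TAGS
    links: list = field(default_factory=list)       # (link_type, target_id)

c1 = Comment("ana", "Does unemployment correlate with age?", tags={"question"})
c2 = Comment("ben", "Yes, see the 2009 view.", links=[("evidence-for", "view_42")])
```

Because both tags and link types come from small fixed sets, comments can be filtered and grouped consistently, which is the navigation and synthesis benefit the paper evaluates.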
- Methods: (1) present technical details of the design of this system and a usage scenario; (2) evaluate it through two controlled user studies and a live deployment, comparing CommentSpace with a similar system that doesn’t support tags and links.
- Main findings: (1) A small, fixed vocabulary of tags and links helps analysts more consistently and accurately classify evidence and establish common ground. (2) Managing and incentivizing participation is important for analysts to progress from exploratory analysis to deeper analytical tasks. (3) Tags and links can help teams complete evidence-gathering and synthesis tasks, and organizing comments using tags and links improves analysis results.
- Analysis: (1) This paper is from the “garden” of information visualization and visual analytics. This line of work (collaborative visual analytics) draws from and expands into CSCW and social media research. Because computing systems ultimately serve people within their social contexts, and because of the popularity of the web, many technical systems are implemented on the web and thus seek to support people, their communication, and their collaboration. I see this emerging point of convergence between social media and visualization techniques, but there are still huge discrepancies in the ways of thinking and working among researchers in different disciplines (esp. computer science and social science). Traditionally, user studies in the technical world often lack rigor or depth. “It was almost a joke in some technical domains that reviewers of papers just need to check the mental box of the existence of user studies without considering the quality”. A large part of those papers is dedicated to “fancy algorithms”. The future of social computing calls for close collaboration between computer scientists and social scientists, and furthermore engineers, artists, and designers. (2) This paper is related to my project of integrating user participation in rating, tagging, and commenting on academic papers. CommentSpace is designed as modular software that can run in conjunction with any interactive visualization system or website that treats each view of the data as a discrete state, so I am looking forward to adopting it, or some elements of it, in my project in the future.
1. APA Citation: Lops, P., Gemmis, M., Semeraro, G., Musto, C., Narducci, F., & Bux, M. (2009). A Semantic Content-Based Recommender System Integrating Folksonomies for Personalized Access. In G. Castellano, L. C. Jain, & A. M. Fanelli (Eds.), Web Personalization in Intelligent Environments (Vol. 229, pp. 27-47). Berlin, Heidelberg: Springer Berlin Heidelberg. Retrieved from http://www.springerlink.com.login.ezproxy.lib.purdue.edu/content/e6603207661850m7/
2. Problem Statement: Traditional content-based personalization (recommender systems) is usually driven by string (keyword) matching operations, i.e., matching attributes of user profiles with attributes of content objects. This method is unable to capture the semantics of the user interests behind the keywords.
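The limitation described above can be made concrete with a minimal sketch of keyword matching, assuming hypothetical profile and item keyword lists. Because matching is purely string-based, synonyms score zero overlap, which is exactly the semantic gap the paper targets.

```python
# Sketch of string-based profile/item matching (hypothetical data):
# synonyms like "film" and "movie" produce no overlap at all.
def keyword_match_score(profile_keywords, item_keywords):
    """Fraction of profile keywords literally present in the item."""
    profile = set(k.lower() for k in profile_keywords)
    item = set(k.lower() for k in item_keywords)
    return len(profile & item) / len(profile) if profile else 0.0

# "film" vs "movie" are semantically close, but the score is 0.0.
score = keyword_match_score(["film", "director"], ["movie", "cinema"])
```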
3. Purpose of the Research: This paper aims at improving recommender systems by incorporating user ratings and tags, and by utilizing semantic analysis techniques derived from research in Information Filtering, Machine Learning, and Natural Language Processing.
4. Research Question: Does the integration of tags increase prediction accuracy in the process of filtering relevant items for users?
5. Some Definitions: Static Content, SocialTags(I), PersonalTags(U,I)
Static Content: compared with the dynamic tags provided by users, the title, author, and original descriptions of the items are static content.
SocialTags and PersonalTags: Given an item I, the set of tags provided by all the users who rated I is denoted as SocialTags(I), while the set of tags provided by a specific user U on I is denoted by PersonalTags(U, I).
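The two tag sets defined above can be sketched directly from their definitions. The annotation data below are hypothetical; only the SocialTags(I) / PersonalTags(U, I) distinction follows the paper.

```python
# Sketch of SocialTags(I) and PersonalTags(U, I) over a hypothetical
# list of (user, item, tags) annotations.
annotations = [
    ("alice", "painting_1", {"renaissance", "fresco"}),
    ("bob",   "painting_1", {"fresco", "vatican"}),
    ("alice", "painting_2", {"portrait"}),
]

def social_tags(item):
    """SocialTags(I): union of tags from all users who annotated item I."""
    tags = set()
    for _user, i, t in annotations:
        if i == item:
            tags |= t
    return tags

def personal_tags(user, item):
    """PersonalTags(U, I): tags provided by a specific user U on item I."""
    tags = set()
    for u, i, t in annotations:
        if u == user and i == item:
            tags |= t
    return tags
```

By construction, PersonalTags(U, I) is always a subset of SocialTags(I), which is what lets the paper compare the two as alternative inputs to the recommender.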
6. Methods:
(1) A system called FIRSt (Folksonomy-Based Item Recommender syStem) is designed, integrating semantic analysis algorithms and User Generated Content (interest ratings and tags). FIRSt allows users to give an interest rating (1-5, 1 = strongly dislike, 5 = strongly like) and free tags to museum paintings.
(2) A user experiment was conducted. 30 non-experts and 10 experts were recruited to rate and tag 45 paintings chosen from the collection of the Vatican picture gallery. The participants were selected according to an availability sampling strategy. The 30 non-experts were young people holding a master’s degree in Computer Science or the Humanities, while the experts were teachers in Arts and Humanities disciplines. The authors performed a statistical evaluation using classification accuracy measures (precision and recall) to investigate whether using only the personal tags or the whole set of social tags is more accurate for recommendation, and whether expertise influences the accuracy of recommendation.
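The classification accuracy measures mentioned above can be sketched as set overlaps between recommended and relevant items. The item lists are hypothetical; the paper's actual relevance criterion is not restated in these notes.

```python
# Sketch of precision/recall over hypothetical recommendation results.
def precision_recall(recommended, relevant):
    """precision = hits / |recommended|, recall = hits / |relevant|."""
    hits = len(set(recommended) & set(relevant))
    precision = hits / len(recommended) if recommended else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

# Hypothetical: 4 items recommended, 5 items actually relevant, 2 hits.
p, r = precision_recall(["p1", "p2", "p3", "p4"], ["p1", "p3", "p5", "p6", "p7"])
```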
7. Main findings:
(1) The highest overall accuracy is reached when user preferences learned from both static content and personal tags (not social tags) are exploited in the recommendation process.
(2) The expertise of users contributing to the folksonomy does NOT actually affect the accuracy of recommendations.
8. Analysis: This paper is closely related to our research team’s current and future work in building a personalization and collaboration system for the engineering education community.
(1) The conclusion here that expertise doesn’t matter is interesting. This recommendation system is mainly for recommending artworks, movies, music, and books. If the item set were expanded to other domains such as academic papers, I am afraid that expertise would matter.
(2) My current work is designing the UI that allows users to specify interest ratings and write tags and comments for papers. For one thing, this will drive the personalization in My Library and My Gallery (the two personalized components in our system). For another, we are also thinking about asking users to specify quality ratings of papers, which could help the whole community make sense of the academic papers in this field.
(3) I have a deep concern about the meaning of personalization and recommendation, namely the filter bubble, social segmentation, or the echo chamber. We only see things that are recommended to us by the machine, and become isolated from the world outside our own.
1. The paper analyzed here:
Welser, H. T., Gleave, E., Barash, V., Smith, M., & Meckes, J. (2009). Whither the experts? Social affordances and the cultivation of experts in community Q&A systems. In 2009 International Conference on Computational Science and Engineering (pp. 450–455).
2. Some Definitions:
Social affordances: the properties of an object or environment that permit social actions. (Wikipedia)
Community-based online Q&A: Online Q&A services can be divided into digital reference services, expert services, and community Q&A sites (Harper, 2008). The focus of this paper is community-based systems like Yahoo! Answers, and Live QnA.
Expert: this paper employs a behavioral definition of social roles: experts are contributors who provide technical and factual answers for the majority of their contributions, typically 80% of their messages or more.
Technical answers: technical answers are explanations or descriptions of courses of action. Technical answers are instructive and will often define terms and connect to resources that aid in the solving of some problem or task.
Factual answers: a factual answer provides a statement of fact that can potentially be verified, in general, and is not dependent upon the identity of the author.
Opinion: opinion is one type of non-answer contribution. An opinion provides an assessment, evaluation, or judgement about something. The opinion provided in the response is not used to provide support, guidance, or advice; it simply involves a stated opinion.
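The behavioral definition above is essentially a threshold rule over a contributor's message labels, and can be sketched as follows. The 80% threshold and the "technical"/"factual" labels come from the paper; the function name and sample data are hypothetical.

```python
# Sketch of the paper's behavioral expert definition: a contributor is an
# "expert" if technical or factual answers are at least 80% of their messages.
def classify_role(message_types, threshold=0.8):
    """message_types: list of labels such as 'technical', 'factual', 'opinion'."""
    if not message_types:
        return "unclassified"
    answers = sum(1 for t in message_types if t in ("technical", "factual"))
    return "expert" if answers / len(message_types) >= threshold else "non-expert"

role_a = classify_role(["technical"] * 8 + ["opinion"] * 2)   # exactly 80% answers
role_b = classify_role(["opinion"] * 6 + ["factual"] * 4)     # only 40% answers
```

This kind of rule is what lets the authors report, for example, that no sampled Live QnA contributor reached the 80% level while 52% of the USENET sample did.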
3. Purpose of the research: This article compares Live QnA and USENET as examples of community-based Q&A systems to answer two questions: (1) To what extent do these systems foster experts who provide technical and factual answers? (2) Which social affordances of these systems encourage or discourage the cultivation of expertise and the performance of the expert role?
4. Method: This paper takes a mixed-methods approach. The authors did a content analysis of 5,972 messages from 288 contributors to Live QnA (the 10% of contributors who contribute 95% of the messages), compared Live QnA to USENET, and identified some issues with Live QnA. Then they took a qualitative approach: generating, based on previous literature, a series of expectations about how social affordances are likely to alter the role ecology of online systems.
5. Major findings: On Live QnA, they found that none of the sampled contributors dedicate 80% or more of their contributions to technical and factual answers. According to this article’s definition of “expert”, there is no expert on Live QnA. Only 1.3% of the sampled contributors dedicate 60% or more of their contributions to technical and factual answers, while 23% dedicate 60% or more of their contributions to opinions. In contrast, in a sample from USENET, 52% of the contributors can be classified as experts, while only 3% dedicate 60% or more of their contributions to opinions. This means the social affordances of USENET cultivate more experts than those of Live QnA. The authors then describe connections within a general theoretical framework for modeling the behaviors of individuals and the settings they are in, and how different social outcomes are likely to emerge from the combination of the attributes of the social setting and the goals and actions of individuals:
A. Emergence of collective outcomes. The emergence of collective outcomes is due to a combination of social affordances, individual goals, and social actions. There is no direct link between social affordances and the outcome, so when we design a system we cannot simply assume that a certain kind of design will result in a certain kind of outcome; we have to take individual goals and social actions into consideration.
B. Reputation, reactions, and roles. One example here is that if reputation scores are based on quantity rather than quality of posts, the system is likely to tend towards trivial contributions, one possible shortfall of the Live QnA system.
C. Context, boundaries and tags. One possible interpretation of this finding is that the content of a newsgroup acts as a set of unwritten norms that, over time, encourage the posting of similar content and discourage the posting of very different content. Contextual boundaries seem to be fuzzier in Live QnA than in USENET.
D. Reinforcing norms of dedicated contributors. This raises the general issue of how implicit and explicit reputation systems alter how people behave and can result in different online role ecologies.
- The overall structure of this paper is very clear, but many details are quite fuzzy, especially in the qualitative section. The authors seem to repeat similar but different concepts here and there, and use many different terms which I am not sure refer to the same things or not. So I don’t know how to summarize many of the conclusions. Some sentence structures are quite odd. For example, in the data collection section, it says “We divided our sample equally between selections from the top ten percent and one percent activity levels as defined by the number of messages (questions, answers or comments) posted”. I don’t understand what “equally” means here; how could that be “equal”? Also, the “as defined” part makes the sentence very long and difficult to read.
- This paper talks about social affordances but doesn’t give a clear definition of them. In the qualitative section, it talks about connections between social affordances and role ecologies, but exactly which social affordances are meant is unclear to me. The distinction between social affordances and other characteristics of the system is also unclear.
- This article helps me understand the culture of online communities. I learned what I have to take into consideration when designing a system to attract experts’ contributions. This paper connects to last week’s Tech621 readings about crowdsourcing and heavyweight communities. On the surface, community-based Q&A is very much like the first type of crowdsourcing (e.g., InnoCentive); however, one major difference is that the InnoCentive type of crowdsourcing (the two types of crowdsourcing need names; I will call them InnoCentive-type and MTurk-type) is lightweight, while community-based Q&A is more heavyweight (Haythornthwaite, 2009), or, in some cases, in between.
- One limitation of this paper is that Live QnA is a small Q&A site and thus may not be representative of most community Q&A systems.
- This paper suggests future directions: (1) test the connections between social affordances and role ecologies proposed by this paper; (2) comparative studies of systems that vary primarily in such social affordances; (3) studies of reputation systems and contextual boundaries; (4) how temporal constraints (each thread on Live QnA has a 4-day life span) affect the quality of data repositories and the roles emergent in them.