Reading Reflection: Crowdsourcing

I’ve been trying to investigate how to leverage the power of crowdsourcing to gain insight into an emerging academic field. I put academic papers from Engineering Education Research on Amazon Mechanical Turk and asked turkers to read and tag them. From the results of a pilot test, I suspect the outcome will not be very good. This week’s readings provide some points for thinking about why this happened and where to go with my future work.

Part 1 of Surowiecki’s book “The Wisdom of Crowds” identifies four conditions that characterize a wise crowd: diversity, independence, decentralization, and aggregation. Based on this and the other readings, I would like to add more conditions under which crowdsourcing can produce good results. Only when most (if not all) of these conditions are satisfied will the collective effort of the crowd excel. I also add thoughts here that are particularly useful for my project and may be relevant to readers of this blog post who are interested in crowdsourcing.

  • Diversity. The crowd has to be diverse enough to provide all the information needed to solve the particular problem. It neither needs to be, nor can it be, diverse in every regard (ethnicity, gender, economic status, skills, expertise, religion, etc.). In the Maze Experiment mentioned on page 6 of “The Wisdom of Crowds”, the group has gone through the maze once, so collectively they hold a nearly complete picture of it; that is, the group contains enough information to accomplish the task optimally the second time. So as long as the crowd is diverse enough to supply complete information for the particular task, it doesn’t matter whether its members differ in gender, race, or worldview, though diverse worldviews do matter in many real-world problems. When using crowdsourcing, I have to consider whether the crowd that comes to solve my problem has the complete information to offer.
  • Balance (I added this one; it is a derivative of Diversity). The crowd also has to be diverse or balanced enough that the errors cancel out when the information is aggregated. In the submarine Scorpion example, the officer recruited mathematicians, submarine specialists, and salvage men. These people are all experts to some extent; they each hold pieces of information valuable to the problem, even if they don’t know it themselves. If the team includes too many irrelevant people with absolutely no information to offer, those people will just contribute errors that cannot be canceled out in the aggregate. For my project, when recruiting workers from outside academia, I might include too many people who have nothing to offer besides errors. Compared with the general crowd, the Engineering Education Research community is very small; I might end up with a very small number of experts and too large a body of non-experts, which makes the crowd imbalanced (see the vote-counting sketch after this list).
  • Independence. This might be the point where “The Wisdom of Crowds” receives the most critiques. Opponents argue that it is impossible for people in the real world to be totally independent and uninfluenced by their social environment, so in this regard the wisdom of crowds has no practical value. Setting aside all the successful real-world cases, Amazon Mechanical Turk is a good place where turkers finish tasks without influencing one another. However, independence is not always a good thing, especially for complex tasks. The CrowdForge paper (Kittur, Smus, & Kraut, 2011) says that “workers generally complete tasks independently with no knowledge of what others have done, making it difficult to enforce standards and consistency”. So micro-task markets can usually only help accomplish simple, low-complexity tasks that require little cognitive effort.
  • Decentralization (I don’t have many thoughts on this yet).
  • Aggregation (I don’t have many thoughts on this yet).
  • Carefully designed tasks. The problems or questions posed to the crowd have to be carefully designed, especially for complex tasks. In the submarine Scorpion example, the officer didn’t ask the team where they thought the Scorpion was; rather, he concocted a series of scenarios about what might have happened to it. However, breaking the task into small pieces carelessly will cause high coordination costs later. Drawing on the organizational behavior and distributed computing literatures, the CrowdForge paper (Kittur, Smus, & Kraut, 2011) provides good arguments and guidelines about why and how to break complex tasks into small pieces suitable for micro-task markets, where workers’ attention spans are short and they don’t commit to long, complex tasks. For my project, the academic papers I put on Amazon Mechanical Turk are usually over 10 pages long, so I have to find ways to break the task down carefully. The CrowdForge paper provides good guidelines, but I have to adapt them to my situation (see the partition/map/reduce sketch after this list).
  • Motivation matters. Various types of motivation are discussed in the readings. What I am wondering is why the engineering education tasks I put on Amazon Mechanical Turk would matter to general workers. According to the Crowd and Community paper (Haythornthwaite, 2009), what I am doing here is blurring the line between the two organizing models: heavyweight community and lightweight crowd. I am trying to make the crowd contribute content that is useful to the community. But why would the crowd want to do that if it doesn’t matter to them? It would benefit the community if I could make this work, because then I could leverage the power of both the community and the crowd, but I have to think carefully about how to do it.
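The balance point above is easy to check with a toy simulation. Below is a minimal sketch, assuming a binary tagging question (does this paper address topic X?), a handful of experts who answer correctly 90% of the time, and a crowd of uninformed workers who effectively flip a coin. All of these numbers are my own assumptions, not anything from the readings; the point is just to show how majority voting degrades as the ratio of non-experts to experts grows.

```python
import random

def majority_vote_accuracy(n_experts, n_guessers, p_expert=0.9, p_guesser=0.5,
                           n_trials=10_000, seed=0):
    """Estimate how often a majority vote recovers the true binary tag.

    Hypothetical model: experts agree with the true tag with probability
    p_expert; uninformed workers agree with probability p_guesser (a coin flip).
    """
    rng = random.Random(seed)
    correct = 0
    for _ in range(n_trials):
        votes = [rng.random() < p_expert for _ in range(n_experts)]
        votes += [rng.random() < p_guesser for _ in range(n_guessers)]
        if sum(votes) > len(votes) / 2:  # strict majority for the true tag
            correct += 1
    return correct / n_trials

if __name__ == "__main__":
    for guessers in (0, 10, 50, 200):
        acc = majority_vote_accuracy(n_experts=5, n_guessers=guessers)
        print(f"5 experts + {guessers:3d} guessers -> accuracy {acc:.2f}")
```

With five experts and no guessers the vote is almost always right; with two hundred coin-flippers drowning them out, it is barely better than chance. That is the imbalance I worry about when recruiting from the general MTurk population.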
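For the task-decomposition point, the CrowdForge paper frames complex work as a partition/map/reduce flow. Here is a minimal sketch of how I imagine applying that idea to my paper-tagging task; the chunk size, tag vocabulary, and vote threshold are my own guesses, not anything prescribed by the paper.

```python
from collections import Counter

def partition(paper_text, max_paragraphs=3):
    """Partition step: split a long paper into chunks short enough for one HIT,
    so a worker reads roughly a page instead of the whole 10+ page paper."""
    paragraphs = [p for p in paper_text.split("\n\n") if p.strip()]
    return [paragraphs[i:i + max_paragraphs]
            for i in range(0, len(paragraphs), max_paragraphs)]

def make_hits(chunks, tags):
    """Map step: one tagging micro-task per chunk, with a fixed tag vocabulary
    so workers pick labels rather than invent them."""
    return [{"chunk_id": i, "text": "\n\n".join(chunk), "allowed_tags": tags}
            for i, chunk in enumerate(chunks)]

def reduce_tags(worker_answers, min_votes=2):
    """Reduce step: keep only the tags that several independent workers agreed on."""
    counts = Counter(tag for answer in worker_answers for tag in answer)
    return [tag for tag, n in counts.items() if n >= min_votes]

if __name__ == "__main__":
    paper = "Intro...\n\nMethods...\n\nResults...\n\nDiscussion..."
    hits = make_hits(partition(paper), tags=["assessment", "curriculum", "retention"])
    print(len(hits), "micro-tasks generated")
    # Simulated answers from three workers who saw the same chunk:
    print(reduce_tags([["curriculum"], ["curriculum", "retention"], ["curriculum"]]))
```

The reduce step also doubles as the aggregation condition from Surowiecki’s list: no single worker’s tags are trusted on their own; only the overlap across independent workers survives.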

Overall summary: Crowdsourcing is not just an internet buzzword, but neither is it an easy and cheap solution to every problem you have. Good results happen only under certain conditions, and bad results like crowdslapping can happen here and there. The process has to be designed carefully, and the crowdsourcing method has to be used wisely.

Question to the readers: do you think there are ways I can make the crowd work for the community (tagging academic papers), and work well? Or is there simply no way?


Crowdsourcing definition?

Wired 14.06: The Rise of Crowdsourcing.

I’ve been looking for literature about crowdsourcing, collective intelligence, and socially distributed cognition. The article above seems to be the earliest source (2006) for the term “crowdsourcing”; in it, the author tries to distinguish crowdsourcing from outsourcing.

I was wondering: what is the definition of crowdsourcing? Are all web-based studies and businesses actually crowdsourcing? The confusing point is whether the tasks distributed online should be pieces of a larger achievement. Amazon Mechanical Turk works this way, and this website (http://www.crowdsource.com/) also claims to work this way (because it uses Amazon Mechanical Turk): crowdsourcing starts by dividing the task into small tasks.

However, a website like www.innocentive.com is also claimed to be a crowdsourcing site, where they basically look for experts to compete on planning or finishing the whole task, at a price cheaper than usual but still far more expensive than Amazon Mechanical Turk. Here, the task isn’t divided into small pieces. Is this better described as “expert sourcing” rather than crowdsourcing? Then what is the difference between expert sourcing and crowdsourcing: are they mutually exclusive, or do they overlap? Here is a blog post about expert sourcing and crowdsourcing: http://itstrategyblog.com/whats-better-crowd-sourcing-or-expert-sourcing/ On choosing between the two, the suggestion there is: “Bottom line is if the idea is evolutionary, then crowd sourcing is just fine. If the idea is revolutionary then expert sourcing is a must.” In my opinion, the term “expert sourcing” shouldn’t be used here; they should use a term such as “expert hiring” instead, because the article mostly talks about one specific expert versus the crowd.

In sum, my opinion is that Amazon Mechanical Turk is crowdsourcing, InnoCentive is expert sourcing (sourcing experts from the crowd), and the regular case of engaging a specific expert is expert hiring, which doesn’t use any web-based recruiting method. This is unfinished; I am still a bit confused myself. I definitely would like to position my TECH621 project in relation to this, but I am not sure how yet.