Mass Collaboration Workshop, Day Two

(My presentation will appear separately)

Collective Knowledge in Social Tagging Environments
Joachim Kimmerle, KMRC


Even though it is hard to find good definitions of knowledge, most psychologists would agree that knowledge is an internal mental representation of external environments. This may seem contradictory to the idea of 'collective knowledge', but the point of this presentation will be that the concept makes sense. In collective knowledge, large groups of people externalize their representations into digital artifacts. An example is social tagging networks.

Background: there is a huge quantity of information on the web - this makes it hard for users to find the best resources and navigate adequately, but by the same token, the web can trigger learning. So in our work we examined the potential of social tagging, and the impact of individual and collective knowledge on social tagging systems.

Prior to the experiment, some literature background: Information Foraging Theory (Fu and Pirolli, 2007) describes how individuals select links and forage for information. Users have to choose between different links and navigation paths. The 'information scent' is the perceived usefulness of the resource. This information scent is based on 'the semantic memory' (my quotes - SD). There are cognitive models of semantic memory - eg. Anderson, 1983; Collins and Loftus, 1975. Chunks are connected to other chunks; connections may have different strengths.

Also, some background on tagging: this is the practice of annotating resources with keywords (aka 'tags') on sites like Delicious, Flickr, etc. People use tags in order to structure, organize and refind resources. Social tagging aggregates the tags of all users (Trant, 2009; Vander Wal, 2005). The resulting collective tag structure represents the collective knowledge. Note that coordination here is not really needed, in contrast to other systems of mass collaboration. The tags establish a network of connections among the tags, among the resources themselves, and among the users who use them. These associations are represented in 'tag clouds', in which the font size represents the strength of association of the tags.
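
(A rough sketch - SD - of how a tag cloud might map collective association strength to font size; the tag names and counts below are invented, not from the talk:)

    # Minimal sketch: scale tag font sizes by how often a tag is used collectively.
    def tag_cloud_sizes(tag_counts, min_pt=10, max_pt=36):
        """Map each tag's collective frequency to a font size in points."""
        lo, hi = min(tag_counts.values()), max(tag_counts.values())
        span = (hi - lo) or 1
        return {tag: min_pt + (count - lo) / span * (max_pt - min_pt)
                for tag, count in tag_counts.items()}

    print(tag_cloud_sizes({"kakheti": 40, "saperavi": 25, "amber-wine": 10}))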

In our experiments we were interested in independent variables (individual strength of association, collective strength of association) and dependent variables (navigation, incidental learning). The topic in question: wine from the country of Georgia (this topic was chosen in order to prevent people from having preconceptions about it). (Surprisingly, 10% indicated they had prior knowledge of the topic, so they were not used in the experiment.) We measured how people tagged, how they clicked on the tags, and what region of Georgia they would select if they wanted typical Georgian wine. People tended to click on the larger tag (ie., the tag with the higher association strength). (Some discussion here of whether they were just clicking on the biggest links.)

Spreading activation theory: the activation of one chunk leads to the activation of associated chunks (Meyer & Schvaneveldt, 1971). This was the subject of a second experiment. Again people were recruited via Mechanical Turk, and people who had prior knowledge of Georgian wine were eliminated. The secondary association was based on wine colour (specifically, 'white wine') and the questions were whether white was detected as 'typically Georgian', and which aromas were associated with white and non-white wines. Again, people selected the bigger link.
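
(To illustrate the mechanism - SD - a toy spreading-activation sketch in Python; the chunk network and weights are invented, not the experiment's materials:)

    # Activating one chunk passes a fraction of its activation to associated chunks.
    network = {
        "georgian wine": {"white wine": 0.6, "red wine": 0.3},
        "white wine": {"floral aroma": 0.7},
        "red wine": {"berry aroma": 0.8},
    }

    def spread(source, decay=0.5, steps=2):
        activation = {source: 1.0}
        frontier = {source: 1.0}
        for _ in range(steps):
            nxt = {}
            for chunk, act in frontier.items():
                for neighbour, weight in network.get(chunk, {}).items():
                    gain = act * weight * decay
                    activation[neighbour] = activation.get(neighbour, 0.0) + gain
                    nxt[neighbour] = nxt.get(neighbour, 0.0) + gain
            frontier = nxt
        return activation

    print(spread("georgian wine"))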

So: individual and collective associations are both relevant. Navigation and learning are linear combinations of both types of associations. And (as a consequence?) people internalize collective knowledge - they do not use it only to select which links to click, but they seem to acquire some knowledge about the topic.
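
(One way to read the 'linear combination' claim - SD - is as a choice model like this sketch; the weights and association values are invented:)

    # Probability of clicking a tag as a weighted sum of individual and collective
    # association strengths, passed through a softmax over the available tags.
    import math

    def click_probabilities(tags, w_ind=0.5, w_col=0.5):
        """tags: dict mapping tag -> (individual_strength, collective_strength)."""
        scores = {t: w_ind * ind + w_col * col for t, (ind, col) in tags.items()}
        z = sum(math.exp(s) for s in scores.values())
        return {t: math.exp(s) / z for t, s in scores.items()}

    print(click_probabilities({"white-wine": (0.2, 0.9), "red-wine": (0.6, 0.3)}))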

Comment: but how is this different from just reading a text? And what is the role of agency? Response: I don't think it is completely different - when I read an article, I understand some of it, I don't understand some of it, I try to use it - I would label this a collaborative process. The 'collective' aspect of social tagging basically comes from what the technology does.



A socio-cognitive approach for studying and supporting interaction in the social web
Tobias Ley


We want to talk about how we can make massive social network data work for us. The focus here will be a tagging system, and in particular a system for recommending tags.

We need a good understanding of the cognitive mechanisms involved in producing and consuming the data, and you need system affordances that facilitate its use. These affordances describe the coupling points between humans and machines. They have typically been studied at the individual level (eg., the door handle) - the affordance is a concept that sits between these things, ie., you have a door handle, but then you have a cognitive representation of it.

Affordances are socially constructed and can be created by aggregating social signals (eg., cowpaths). Similarly, in social systems, affordances can be the result of aggregated behaviour.

We want also to talk about what we can do to make these systems work better, eg., via recommendation systems.

This whole system can be viewed as a distributed cognitive system, an ecosystem of humans and artificial agents, where affordances co-evolve in the system. Cf. work on imitation in social tagging (eg., I see some tags, I decide to use them for myself - this is how some tags get popular and others don't get popular). Also - what is the role of memory in producing social tags - how are tags represented and processed in memory?

So, from the perspective of models of imitation, how does consensus emerge? A typical imitation mechanism is preferential attachment - you just copy a tag that has been used by someone else. Another mechanism is semantic imitation - you don't copy the word directly, but the tag creates a certain representational context in your mind, and you use these other concepts (eg., a tag 'book' leads you to use a tag like 'read').
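
(A toy contrast of the two mechanisms - SD; the tag counts and the 'semantic neighbour' table are made up:)

    import random

    tag_counts = {"book": 12, "read": 5, "novel": 2}            # tags seen so far
    semantic_neighbours = {"book": ["read", "novel"], "read": ["book"]}

    def preferential_attachment():
        """Copy an existing tag with probability proportional to its frequency."""
        tags, counts = zip(*tag_counts.items())
        return random.choices(tags, weights=counts)[0]

    def semantic_imitation(seen_tag):
        """Don't copy the tag itself; pick a semantically related concept."""
        return random.choice(semantic_neighbours.get(seen_tag, [seen_tag]))

    print(preferential_attachment(), semantic_imitation("book"))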

A model for studying this is 'fuzzy trace theory' - you can be in different recall states when you have learned something. Sometimes you forget the words, eg. of a song, but you can remember the meaning. This can inform tag-based search - sometimes you learn the tag, sometimes you learn the gist (this is called the gist trace). See Brainerd et al., 2010.

So - the experimental study: what role do verbatim and semantic processes play when imitating tags? And can these processes be dissociated using practically significant variables? The experiment uses the RTTT procedure (basically a way for people to tag the same photo several times).

Here's the model then, to dissociate verbatim and semantic imitation: if you learn the tag, you have 'direct access' to it, and you imitate it. Or you may have no direct access; then you may either reconstruct the tag - possibly even the original tag, or another semantically relevant imitation - or, finally, you may have no recall, in which case you are guessing. The model fits the data very well.
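
(A toy version of that dissociation - SD; the parameter values are invented, not the fitted ones:)

    # With probability d there is direct access (verbatim imitation); otherwise the
    # tag is reconstructed with probability r (sometimes yielding the original word,
    # sometimes a semantically related one); otherwise the response is a guess.
    def response_probabilities(d=0.15, r=0.4, p_original_if_reconstructed=0.3):
        p_verbatim = d + (1 - d) * r * p_original_if_reconstructed
        p_semantic = (1 - d) * r * (1 - p_original_if_reconstructed)
        p_guess = (1 - d) * (1 - r)
        return {"verbatim": p_verbatim, "semantic": p_semantic, "guess": p_guess}

    print(response_probabilities())  # the three probabilities sum to 1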

The results? The rate of semantic imitation was relatively constant at about 13%; verbatim imitation varied quite a bit (8% - 20%). Influencing factors included semantic layout of the tags, size of the tags, and connectivity of the tags.

A technology recommender system was developed based on the principles of this model. It basically figures out the sorts of tags you would use, so it can recommend them to you. It is based on a connectionist network with a hidden layer, where resources are encoded in terms of topics or categories (eg., Wikipedia page categories). The recommender learns, for each person, all the tag associations they have made in the past, and then tries to match this pattern to all the different examples.
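
(Not the actual recommender - SD - just a minimal connectionist sketch of the idea: resources encoded by hypothetical categories, a hidden layer, and one user's invented tagging history:)

    import numpy as np
    from sklearn.neural_network import MLPClassifier

    topics = ["cooking", "travel", "wine"]                 # category vocabulary
    def encode(resource_topics):
        return [1 if t in resource_topics else 0 for t in topics]

    # One user's past taggings: (resource categories, tag used)
    history = [(["wine"], "saperavi"), (["travel", "wine"], "georgia"),
               (["cooking"], "recipe"), (["wine", "cooking"], "pairing")]
    X = np.array([encode(ts) for ts, _ in history])
    y = [tag for _, tag in history]

    model = MLPClassifier(hidden_layer_sizes=(4,), max_iter=2000, random_state=0)
    model.fit(X, y)
    print(model.predict([encode(["travel", "wine"])]))     # recommended tag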

Can this algorithm guess which tags people will use? The algorithm was superior to semantic categories extracted from existing tags, to approaches where you just choose the most popular tags, and to a spreading-activation model. But we don't know the answer to the question yet - it would be interesting to apply it to a real system. So, eg., a 'tagging in the real world' project: eg., tagging real-world objects in construction and health care - some examples of people tagging machines (with warnings, instructions, etc). Another project - 'My learning episode' - a sensemaking interface. http://developer.learning-layers.eu/tools/bits-and-pieces/RunningDemo

Future work includes the study of tagging processes - is it an automatic process, done out of habit, or is it a deliberate process, where you look at other tags and decide whether to reuse them or not? Also, how strong is the affordance character of social recommendations? And what is the influence of the physical environment?


Network analysis of mass collaboration in Wikipedia and Wikiversity
Iassen Halatchliyski


We're looking at long-term self-organizing processes based on stigmergic methods of coordination, where the knowledge artifacts have a network structure.

What is important from the theoretical background is the focus on the link between individual and collective knowledge. (Reference to a bunch of theories by title - complex systems, socio-cultural construction, situated learning, etc.)

The approach is to use network analysis techniques, metrics and algorithms, and apply them to networks of knowledge artifacts. Three studies.

Study 1: based on the assumption that the internal logic of knowledge is reflected in the network structure of the artifacts. This leads to the exploration of the potential for modeling collaborative knowledge through its network structure. It was a cross-sectional analysis of hyperlinked articles in Wikipedia. It asked the question, "what is the editing experience of authors who contributed to pivotal articles?"

So, eg., we have a network of two combined domains - education and psychology - with about 8,000 and 2,000 articles respectively. We look at boundary-spanning articles using a 'betweenness' measure of centrality, as well as the articles that are 'central' in each of the two domains. The experience the authors gained from working on different articles in Wikipedia is related to how pivotal the articles are that they work on - in the long run, experienced authors create pivotal articles. The explanation is that in the long run, experienced authors will write pivotal articles that set the stage for new knowledge.
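
(A toy networkx sketch of the boundary-spanning measure - SD; the article graph is invented, not the actual data:)

    import networkx as nx

    G = nx.Graph()
    G.add_edges_from([
        ("Learning", "Memory"), ("Memory", "Cognition"),           # psychology side
        ("Curriculum", "Didactics"), ("Didactics", "Assessment"),  # education side
        ("Learning", "Didactics"),                                 # boundary link
    ])
    centrality = nx.betweenness_centrality(G)
    # Articles sitting between the two domains score highest.
    print(sorted(centrality.items(), key=lambda kv: kv[1], reverse=True)[:3])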

Study 2. How is the development of new knowledge related to the articles with a pivotal network position? The background here is based around preferential attachment (Barabasi and Albert, 1999) and the idea of a world of ideas with their own lifecycle. This study followed the same network as it developed in Wikipedia over 7 years. 'New knowledge' in Wikipedia may be new articles or edits to existing articles.
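
(For illustration only - SD - growing a toy article network by preferential attachment and checking that early, well-connected nodes attract most new links:)

    import networkx as nx

    # Each new article links to 2 existing ones, preferring already well-linked articles.
    G = nx.barabasi_albert_graph(n=2000, m=2, seed=42)
    top = sorted(G.degree, key=lambda kv: kv[1], reverse=True)[:5]
    print(top)   # a handful of old, pivotal nodes accumulate most connections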

Study 3. How do we identify pivotal contributions and moments in a discourse process? Discourse happens continuously over time and builds on previous contributions. This study used a scientometric method for quantitative studies of scientific work, which is used to identify the main network flows in a scientific literature connected by citations. (Some diagrams shown illustrating the 'flow of ideas' through a research community.)



Olga Slivko
Is there a peer effect in knowledge generation in productive online communities? The case of German Wikipedia


From an economic perspective, we look at interactions between individuals sharing existing resources to produce a common socially valuable output. What processes drive contributions to online communities? There's pure altruism, there's social image, there's reciprocity to peers, etc. So the question is, is there any social reciprocity / social interaction in contributions to Wikipedia?

In Wikipedia there are differences from other social networks. There is a need for coordination on a single page. There are no explicit friendship structures on Wikipedia. Individuals do not get a high 'reputation' on Wikipedia (so there are no potential monetary gains). So, does the peers' activity affect knowledge generation?

In previous research on peer interaction, we find strong influence of peers on group behaviour (eg., health-related attributes such as smoking, GPA and choice of major). And social ties matter for engagement in open source software development projects, online music, and video gaming networks.

Measurement: the utility of an individual contribution to Wikipedia.

Network of editors: editors are connected if they made a revision on a page within a 4-week span (these links can expire as well). We can construct networks of editors out of these connections (long 'standard' formula used to describe this).
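
(A hypothetical sketch of that construction - SD; the revision records are invented:)

    from datetime import datetime, timedelta
    from itertools import combinations
    import networkx as nx

    revisions = [  # (editor, page, timestamp)
        ("anna", "Mannheim", datetime(2014, 3, 1)),
        ("bernd", "Mannheim", datetime(2014, 3, 20)),
        ("clara", "Mannheim", datetime(2014, 6, 1)),   # too late to link to the others
    ]

    G = nx.Graph()
    for (e1, p1, t1), (e2, p2, t2) in combinations(revisions, 2):
        if e1 != e2 and p1 == p2 and abs(t1 - t2) <= timedelta(weeks=4):
            G.add_edge(e1, e2)   # co-revision of the same page within the window
    print(G.edges())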



Ben Shapiro
Where do we go from here? Teaching, mentoring, and learning in the games and crowdsourcing era


The point of philosophy is to change things (Marx), and maybe we should be looking at what we want mass collaboration (or cooperation?) to change things to.

We've heard of Wikipedia as a community of practice - but what does that mean? Reference to Lave and Wenger and legitimate peripheral participation. Bryant, Forte and Bruckman 2005 - roles and goals change as one moves from being on the outside to gradually becoming a core member.

So, videogames: Minecraft. It's very popular, but it's also very unstructured. A lot of people just use it to build things. Another game: Civ V - you create civilizations, with multiple ways to be victorious. People create multiple causal maps of how to be successful in a game.

Most of what's happening in a game community happens outside the game - people collaborate in ways the game environment doesn't allow you to. They create content collaboratively. World of Warcraft is another very popular game - with WoW you can't get very far in the game without working with other people. You have to apprentice with more serious players to get ahead.

It has also been explored how players playing Warcraft are engaging in scientific practices and discourses, eg. Steinkuehler & Duncan, 2007, 2008 - use of data and experiments in game play. But the hitch is that they are engaging in these practices in make-believe worlds.

As designers we can do better: online environments that are as engaging as actual games, but embedded in real science.

There are some lightweight crowdsourcing games designed by scientists to collect data. Crowdsourcing: where a group of people work together to solve a problem posed by an interrogator. Eg. Galaxy Zoo - players go to the site, they see a picture of a galaxy, and they are asked to label the picture - is it a spiral? Is it clockwise?

But this is an inauthentic scientific process - scientists themselves do not feel this work is worth doing as part of science (but they like getting other people to do it). Also, the players must be strictly independent - they cannot interact, because you don't want to taint the data.

Another game, Foldit. You create protein structures. You can work as teams. Questions: is this a collaboration (with scientists)? Is this a learning environment? Popovic (creator of Foldit) says they do learn, but the players themselves have no idea what the 'blue and orange thingies' do.

Learning is not the point. These are labour systems, commoditizing the work that the machines cannot yet do. But they don't help you learn, and they're not collaborative systems.

So, there's the Maker Movement. It's this convergence of DIY with open source software and electronics. We have to study this as a distributed activity system, where the people have different goals. There's outreach to bring people in - eg. Make Magazine, Maker Faire, etc. Also, online communities and sites. It happens in communities, and also in purpose-built places (eg. Island Asylum) - you pay a fee to access eg. tools and such.

Right now it's the early phase where there's a lot of excitement and little study (this would be a good place to start studying).

So - what could the future of this thing look like? Eg., participatory discovery networks. We see pieces of this in things like citizen science projects - they look at distributions of bugs or count birds in trees. Or after Fukushima, when people built real-time radiation monitoring.

So - a hypothetical example - how you might revolutionize medical imaging in developing nations. They need better diagnostic information - they have nobody to read the images. So, how do you enable people to build the hardware, how do you enable them to share it, and how do you get people around the world to contribute?

We've been using a tool we developed called 'BlockyTalky' - you build software by assembling blocks.

The online tools - eg. a Facebook plugin giving access to a CT scan, where people could look at it, argue about what the image shows, etc.

Imagine how the public could work together to improve public health. Could we get people working to make things and at the same time develop enough education to make devices that are useful? Could we build communities around this - people doing first-pass analyses of things, which can be passed to people with more experience?

Creating participatory discovery networks that address real problems is something we can explore.



Language Technology for Mass Collaboration in Education
DIPF Frankfurt - UKP Lab
Iryna Gurevych


The motivation for natural language processing in mass collaboration: there is evidence that information learned in collaborative learning is retained longer. There are instances of computer-supported collaborative learning, eg., discussion boards and wikis, computer-supported argumentation, and community-driven question answering.

These new forms of collaborative learning bring some challenges along with them. They result in massive amounts of unstructured textual content - people expressing their opinions, eg. - and for humans it is impossible to process all of this content. Especially for learners, it is difficult to process, and difficult to assess for quality.

The current issues can be summarized as:
- knowledge is scattered across multiple documents / locations
- difficulty having an overview
- abundance of non-relevant or low-quality content
- platforms for collaboration do not provide intelligent tools supporting users in their information needs

(The specific issue is that learners, precisely because they are learners, do not have the background knowledge.)

Natural language processing is a key technology to address these issues - it enables users to find, extract, analyze, and utilize the information from textual data.

For example, one of the things that can be analyzed are 'edit-turn pairs' - edits are fine-grained local modifications from a pair of adjacent revisions to a page, and include deletion, insertion, modification, or relocation (a more detailed taxonomy was created). Turns are single utterances in the associated discussion pages, and again can be given metadata.

We asked: what kind of correspondence can be identified between edits and turns? What are the distinguishing properties of corresponding edits and turns that can be used to show they are correlated? How much knowledge in the article was actually created in the discourse on the discussion page?

(Example of an edit-turn pair)

Ferschke et al. (2012) propose explicit performative turns: 1. turn as explicit suggestion, 2. turn as explicit reference, 3. turn as explicit commitment to edit in the future, 4. report of a performed action. Other turns not part of this set are defined to be 'non-corresponding'. Mechanical Turk was used to select corresponding turns. From 750 randomly selected turns, 128 corresponding turns were found.

Language processing techniques were then used to classify the turns. We find we can detect the non-corresponding turns with a rate of 0.9 and corresponding turns with a rate of 0.6.
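
(Not the UKP pipeline - SD - just a stand-in showing the shape of the classification task; the training examples are invented:)

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    turns = ["I suggest we move this paragraph to the history section",
             "Done, I merged the two sections as discussed",
             "This article is great, thanks everyone",
             "What does this template even mean?"]
    labels = ["corresponding", "corresponding",
              "non-corresponding", "non-corresponding"]

    clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
    clf.fit(turns, labels)
    print(clf.predict(["I will reorganize the lead section tomorrow"]))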

Ivan Habernal (continuation of the same talk)

Argumentation mining is a rich and growing field. It includes stances and reasons, how argumentation is put forward, and phenomena typical of argumentative discourse, eg., fallacies. Our research looked at controversial topics within the educational domain, mostly two-sided debates (eg., home-schooling, mainstreaming). The purpose was to enable people to support their personal decisions and to give reasons for these decisions.

(Diagram of the whole pipeline) - identification of topic, discovery of relevant posts, extraction of argumentative data, annotated argumentation.

So - we needed to create a corpus with which to feed our machine-learning algorithms - for example, to identify persuasive on-topic documents (as most documents are neither). This was a binary decision over 990 documents (comments on articles, forums, posts), which obtained pretty good agreement (0.60).

Next, we go deeper into the structure of argumentation in the documents. There are different schools describing arguments from different perspectives. Our approach was inspired by a model proposed by Toulmin (1958) which uses five concepts in the logos dimension: claim, grounds, backing, rebuttal, and refutation. There's also the pathos dimension, an appeal to emotions. We wanted to find the corresponding text and associate it with the labels.
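
(One possible way - SD - to represent such an annotation as a data structure; the example text is invented:)

    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class ArgumentAnnotation:
        claim: str
        grounds: List[str] = field(default_factory=list)
        backing: List[str] = field(default_factory=list)
        rebuttal: Optional[str] = None
        refutation: Optional[str] = None
        appeal_to_emotion: Optional[str] = None   # pathos dimension

    example = ArgumentAnnotation(
        claim="Home-schooling should remain legal",
        grounds=["Parents know their children's needs best"],
        backing=["Our own children thrived after we switched"],
    )
    print(example.claim, example.grounds)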

Challenges:
- the main message is often not stated
- granularity of the concepts
- the very general challenge of analyzing argumentation that the users are not even aware of using

Results from this study: 350 documents (sampled from the first phase) with 90K tokens. We found agreement for claims and grounds, less for the others; for longer posts (eg. blog posts) we could not find agreement even for claims and grounds. The longer texts heavily rely on narratives, cite research articles, etc., and can hardly be captured by Toulmin's model.

Later this year: complex community-based question answering. Factoid questions can easily be answered by a computer, but the 'why' questions are much more difficult. We would like to solve this by combining user ratings, model answers, etc.

Conclusions: these are three examples of NLP technologies that can be used in mass collaboration in education. These could be used to summarize arguments, and help students form their own arguments.


Question on 'indicator words' (or 'discourse markers') - they play a role, especially in well-written text. But in social media discourse, these discourse markers were misused or missing.

Tool used: text classification framework (all open source tools).
