Research:Newsletter/2022/September
Vol: 12 • Issue: 09 • September 2022
How readers assess Wikipedia's trustworthiness, and how they could in the future
By: Tilman Bayer
"Why People Trust Wikipedia Articles: Credibility Assessment Strategies Used by Readers"
OpenSym 2022, "the 18th International Symposium on Open Collaboration", took place in Madrid earlier this month. While the conference started out back in 2005 as WikiSym, focused on research about Wikipedia and other wikis, this year only a single paper in the proceedings covered such topics; it did, however, win the conference's "OSS / OpenSym 2022 Distinguished Paper Award". In the abstract,[1] the authors summarize their findings as follows:
"Through surveys and interviews, we develop and refine a Wikipedia trust taxonomy that describes the mechanisms by which readers assess the credibility of Wikipedia articles. Our findings suggest that readers draw on direct experience, established online content credibility indicators, and their own mental models of Wikipedia’s editorial process in their credibility assessments. "
They also note that their study appears to be "the first to gather data related to trust in Wikipedia, motivations for reading, and topic familiarity from large and geographically diverse set of Wikipedia readers in context–while they were actually visiting Wikipedia to address their own information needs."
The research project (begun while one of the authors was a staff member at the Wikimedia Foundation) first conducted two online surveys displayed to English Wikipedia readers in early 2019, asking questions such as "How much do you trust the information in the article you are reading right now?" Among the topline results, the researchers highlight that, consistent with some earlier reader surveys:
"Overall, respondents reported a very high level of trust in Wikipedia. 88% of respondents to the first survey reported that they trusted Wikipedia as a whole "a lot" or "a great deal". 73% of respondents to the second survey reported that they trusted the information in the article they were currently reading "a lot" or "a great deal" (94% in the first survey 6 ). In contrast, less than 4% of respondents in the second survey reported distrusting the information in the current article to any degree."
Survey participants were also asked about their reasons for trusting or distrusting Wikipedia in general, as well as the specific article they had been reading when they saw the survey invitation. The researchers distilled these free-form answers into 18 "trust components" and present the following takeaways:
The four components that respondents find most salient (highest agreement) relate to the content of the article: assessments of the clarity and professionalism of the writing, the quality of the structure, and the accuracy of the information presented. The next four highest-ranked trust components focus on one aspect of the article’s context, the characteristics of the article writers: their motivations (to present unbiased information, fix errors, help readers understand) and their perceived domain expertise. Intriguingly, readers do not seem to consider the "wisdom of the crowd" to be a particularly salient factor when making credibility assessments about Wikipedia articles: the three lowest-ranked trust components all relate, in one way or another, to the relationship between crowdsourcing and quality (search popularity, number of contributors, and number of reviewers). This finding suggests that, at least nowadays, reader trust in Wikipedia is not strongly influenced by either its status as one of the dwindling number of prominent open collaboration platforms, or its ubiquity at the top of search results.
In a third phase (detailed results of which are still to be published on Meta-wiki), a sample of survey participants were interviewed in more depth about their previous answers, with the goal of "gain[ing] a deeper understanding into the factors that mediate a reader’s trust of Wikipedia content, including but not limited to citations." Combining results from the interviews and surveys, the researchers arrive at a refined "Taxonomy of Wikipedia Credibility-Assessment Strategies", comprising 24 features in three overall categories: "Reader Characteristics" (e.g. familiarity with the topic), "Wikipedia Features" (e.g. its "Pagerank" or its "Open Collaboration" nature), and "Article Features" (e.g. "Neutral Tone", "Number of Sources").
Lastly, the paper offers some more speculative exploratory analysis results "to spark discussion and highlight potential areas of future research":
- "Although the correlation is weak, [one] finding could indicate that readers have a higher threshold for trust when they require an in depth understanding of an article’s topic vs. learning a quick fact contained within the article."
- "We found a (weak) positive relationship between a respondent’s trust in an article and the predicted ORES quality class of that article (Spearman’s Rho 0.067, n=1312, p = 0.014). This provides additional evidence that readers are able to accurately assess the general quality of the article they are reading, and that content-related factors do inform their credibility assessments."
- "On average, trust was highest among respondents in India and Germany and lowest in Canada and Australia, although a large variability in sample size between countries suggests caution in over-interpreting these results."
"Templates and Trust-o-meters: Towards a widely deployable indicator of trust in Wikipedia"
This paper, presented earlier this year at the ACM Conference on Human Factors in Computing Systems (CHI),[2] opens by observing that
"[...] despite the demonstrable success of Wikipedia, it suffers from a lack of trust from its own readers. [...] The Wikimedia foundation has itself prioritized the development and deployment of trust indicators to address common misperceptions of trust by institutions and the general public in Wikipedia . [... Previous attempts to develop such indicators] include measuring user activity; the persistence of content; content age; the presence of conflict; characteristics of the users generating the content; content-based predictions of information quality; and many more. [...] However, a key issue remains in translating these trust indicators from the lab into real world systems such as Wikipedia."
The study explored this "'last mile' problem" in three experiments where Amazon Mechanical Turk participants were shown versions of Wikipedia articles modified by artificially adding warning templates (both existing ones and a new set designed by the authors, in several difference placements near the top of the page), and lastly by "a new trust indicator that surfaces an aggregate trust metric and enables the reader to drill down to see component metrics which were contextualized to make them more understandable to an unfamiliar audience." Participants were then asked various questions, some designed to explore whether they had noticed the intervention at all, others about how they rated the trustworthiness of the content.
Three of the 9 existing warning templates tested produced a significant negative effect on readers' trust (at the standard p=0.05 level):
"As expected, several of the existing Wikipedia templates significantly influenced reader trust in the negative direction. This is unsurprising, as these templates are designed to indicate a serious issue and inspire editors to mobilize. The remaining templates, ‘Additional citations’, ‘Inline citations’, ‘Notability’, ‘Original Research’, ‘Too Reliant on Primary Sources’ and ‘Too Reliant on Single Source’ did not result in significant changes. It is possible that the specific terms used in these templates were confusing to the casual readers taking the survey. Particularly strong effects were noted in ‘Multiple Issues’ (-2.101; ‘Moderately Lowered’, p<0.001), ‘Written like Advertisement’ (-1.937, p<0.001), and ‘Conflict of Interest’ (-1.182, p<0.05)."
Four of the 11 notices newly created by the researchers also significantly affected trust: "The strongest negative effects were found in ‘Editor Disputed References’ (-1.601 points from baseline, p<0.001), ‘General Reference Issues’ (-1.444, p=0.002), ‘Tone and Neutrality Issues’ (-1.184, p=0.012), and ‘Assessed as Complete’ (-1.101, p=0.017)."
There was also strong evidence for "banner blindness"; for example, in one experiment:
"The percentage of readers who had not seen the intervention completely was 48.5%. We found this surprising, as our notices (including existing Wikipedia templates) were placed in a high visibility location where current Wikipedia templates reside and multiple task design elements were put in place to help participants focus on them."
In the third experiment, readers were shown articles first without and then with the newly designed trust indicator, which displayed various quantitative ratings (e.g. "Quality rating: official evaluation given by reputable editors", "Settledness: length of time since significant edits or debates"). They were told that it "shows the trustworthiness score of the article, calculated from publicly available information regarding the content of the article, edit activity, and editor discussions on the page", and then asked to rate the article's trustworthiness again (among other questions). This resulted in
"reliable increases in trust at top indicator levels [...] This suggests that a trust indicator can provide system designers with the tools to dial trust in both positive and negative directions, under the assumption that designers choose accurate and representative mappings between indicator levels and article characteristics."
Interestingly, neither of the two studies about Wikipedia readers' trust reviewed above appears to have been aware of the other research project's findings, even though both were at least partly conducted at the Wikimedia Foundation.
Wikimedia Research Fund invites proposals for grants up to $50k, announces results of previous year's round
Until December 16, the Wikimedia Foundation is inviting proposals for the second edition of its Wikimedia Research Fund, which provides grants between $2k and $50k "to individuals, groups, and organizations with research interests on or about Wikimedia projects [...] across research disciplines including but not limited to humanities, social sciences, computer science, education, and law."
This is the second edition of the research fund, whose inaugural edition had closed for submissions in January 2022. Earlier this month, the Wikimedia Foundation also publicly announced funding decisions about proposals from this 2021/2022 edition, and published the full proposal texts of finalists (while inviting the community to "review the full proposal"). The funded proposals are:
- "The Impact of Wikipedia on Science" (January 2022 to December 2022, 40,000-50,000 USD)
- "Slow Editing towards Equity" (June 2022 – June 2023, 30,000-39,999 USD)
- "Social and Language Influence in Wikipedia Articles for Deletion Debates" (March 2022 - August 2022, 5,000-9,999 USD)
- "Wikidata Gender Diversity (WiGeDi)" (01.06.2022 – 31.05.2023, 40,000-50,000 USD)
- "Adapting Wikidata to support clinical practice using Data Science, Semantic Web and Machine Learning" (1 August 2022-31 July 2023, 49,867.21 USD)
- "From the media to Wikipedia: the relationship between Chilean media news and malicious edits made in the virtual encyclopedia during the Social Outbreak of Chile in 2019." (April 2022 - April 2023, 5,000-9,999 USD)
- "Can Machine Translation Improve Knowledge Equity" (June 2022 ~ May 2023, 40,000-50,000 USD)
- "Using Wikipedia for educational purposes in Estonia: The students′ and teachers′ perspective" (June 1, 2022 - May 31, 2023, 30,000-39,999 USD)
- "Grounding NPOV pillar in post-censored information ecology" (September 1st, 2022 - May 1st, 2023, 10,000-19,999 USD)
Other recent publications
Other recent publications that could not be covered in time for this issue include the items listed below. Contributions, whether reviewing or summarizing newly published research, are always welcome.
"Accounts that never expire: an exploration into privileged accounts on Wikipedia"
This study[3] found that a 2011 English Wikipedia policy change to remove the rights of inactive administrators did not reduce the (already low) frequency of admin accounts being compromised.
References
1. Elmimouni, Houda; Forte, Andrea; Morgan, Jonathan (2022-09-07). "Why People Trust Wikipedia Articles: Credibility Assessment Strategies Used by Readers". Proceedings of the 18th International Symposium on Open Collaboration. OpenSym '22. New York, NY, USA: Association for Computing Machinery. pp. 1–10. ISBN 9781450398459. doi:10.1145/3555051.3555052.
2. Kuznetsov, Andrew; Novotny, Margeigh; Klein, Jessica; Saez-Trumper, Diego; Kittur, Aniket (2022-04-27). "Templates and Trust-o-meters: Towards a widely deployable indicator of trust in Wikipedia". CHI '22: CHI Conference on Human Factors in Computing Systems. New Orleans, LA, USA: ACM. pp. 1–17. ISBN 9781450391573. doi:10.1145/3491102.3517523.
3. Kaufman, Jonathan; Kreider, Christopher (2022-04-01). "Accounts that never expire: an exploration into privileged accounts on Wikipedia". SAIS 2022 Proceedings.