Research:Prioritization of Wikipedia Articles/Importance/SuggestBot
This page documents a research project in progress.
Information may be incomplete and change as the project progresses.
Please contact the project lead before formally citing or reusing results from this page.
This experiment explored editors' willingness to balance personalization with content equity in edit recommender systems. We studied SuggestBot, an open-source recommender system that matches Wikipedia editors with tasks they might be interested in completing. In theory, recommendations can be made more fair at a global level (e.g., recommending biographical articles about women and men at more similar rates) with minimal loss in how relevant they are to those receiving them. We tested the effectiveness of this approach in the Wikipedia task-routing context by modifying SuggestBot's algorithm to incorporate content-equity aspects alongside personalization, deploying the modified algorithm for a subset of SuggestBot's recommendations, and comparing the uptake of these new recommendations with those generated by SuggestBot's standard algorithm.
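The page does not detail how SuggestBot's algorithm was modified; as a rough sketch of the general idea, an equity-aware re-ranker might add a small additive boost to facet members before sorting by predicted relevance. The field names (`score`, `in_facet`) and the boost magnitude below are hypothetical, not SuggestBot's actual implementation:

```python
def rerank_with_equity(candidates, boost=0.15):
    """Re-rank personalized candidates, adding a flat boost to items in a
    content-equity facet. Field names and boost size are illustrative."""
    for c in candidates:
        # Additive boost: facet members move up the ranking, trading a
        # little personalization for more equitable global exposure.
        c["adjusted"] = c["score"] + (boost if c["in_facet"] else 0.0)
    return sorted(candidates, key=lambda c: c["adjusted"], reverse=True)

# With boost=0.15, the facet article overtakes the slightly more
# relevant non-facet one (0.80 + 0.15 = 0.95 > 0.90).
ranked = rerank_with_equity([
    {"title": "Article A", "score": 0.90, "in_facet": False},
    {"title": "Article B", "score": 0.80, "in_facet": True},
])
```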
Methods
Our experiment ran from September 7 through December 31, 2022. On average, about half of the recommendations provided to each user were modified in some way. Our final dataset includes 39,990 recommendations (1,333 sets of 30) across 281 unique users. Specifically, we modified the recommendations in three ways:
- Boosting content about women and people with non-binary gender identities
- Boosting content about emerging communities
- Boosting content about high-impact topics (WikiProjects matching UN SDGs)
For each boosted recommendation, which by definition had lower predicted relevance, we also included a control recommendation of similarly reduced relevance (e.g., a lower-relevance biography of a man) so that relevance itself would not confound the comparison.
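The exact matching procedure is not described above; one simple way such a control could be chosen is to pick the non-facet candidate whose predicted relevance is closest to the boosted item's. Field names here are hypothetical:

```python
def matched_control(boosted, pool):
    """Return the non-facet article whose predicted relevance is closest
    to the boosted item's, so differences in uptake cannot be attributed
    to relevance alone. Field names are illustrative."""
    non_facet = [a for a in pool if not a["in_facet"]]
    if not non_facet:
        return None
    return min(non_facet, key=lambda a: abs(a["score"] - boosted["score"]))
```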
Results
Overall, we found no significant differences in an editor's likelihood to edit a recommended article based on its relevance or its membership in any of the content-equity facets being tested. Although the differences were not statistically significant, we observed a slightly elevated edit rate for articles belonging to a content-equity facet compared to equal-relevance controls. This is encouraging news, suggesting that editors are flexible enough to edit a wider diversity of content. The descriptive statistics reflect this shift:
- Biographies: edits to articles about women and non-binary identities rose from a baseline of ~30% to 40.5% during the experiment.
- Emerging communities: edits to articles about emerging regions rose from a baseline of ~22% to 29.3% of geographic-article edits during the experiment.
- High-impact topics: edits rose from a baseline of ~7.5% to 11.4% during the experiment.
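The page does not name the statistical test used; a standard way to compare uptake between boosted items and their equal-relevance controls is a two-proportion z-test, sketched below with made-up counts rather than the study's actual data:

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_ztest(edits_a, n_a, edits_b, n_b):
    """Two-sided z-test for a difference in edit rates between two
    recommendation groups (e.g., facet-boosted vs. matched controls)."""
    p_a, p_b = edits_a / n_a, edits_b / n_b
    pooled = (edits_a + edits_b) / (n_a + n_b)            # pooled edit rate
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))          # two-sided
    return z, p_value

# Made-up counts for illustration; not the study's actual numbers.
z, p = two_proportion_ztest(edits_a=60, n_a=666, edits_b=52, n_b=667)
print(f"z = {z:.2f}, p = {p:.3f}")
```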
Implications
These results suggest that editors can easily be caught in filter bubbles where content gaps self-propagate, because incidental editing and personalized recommendations are unlikely to steer editors toward content gaps they would be happy to address. While this experiment took a top-down approach of defining specific topical areas to boost, which allowed for careful control and analysis, that approach has major limitations: many important topic areas cannot be easily tagged for inclusion in an algorithm, and it can be challenging to set the "right" adjustment to content in a recommender system. Instead, more filters, such as those for WikiProjects, should be enabled in our recommender systems to make it easier for editors to discover campaigns they might be interested in or to filter to content-gap-related topics.
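As a minimal sketch of that filter idea, assuming each candidate carries a list of WikiProject tags (a hypothetical field, not SuggestBot's actual data model):

```python
def filter_by_wikiproject(candidates, wikiproject):
    """Keep only recommendations tagged with the given WikiProject,
    letting an editor opt into a campaign or content-gap area."""
    return [c for c in candidates if wikiproject in c.get("wikiprojects", [])]
```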