Research:ORES Inspect: A technology probe for machine learning audits on enwiki

Created: 22:05, 26 March 2021 (UTC)
Duration: January 2021 – June 2024
This page documents a completed research project.


We designed a Toolforge tool called ORES-Inspect.[1] ORES-Inspect helps Wikipedians inspect the predictions that ORES makes about whether individual edits are damaging or not. In other words, ORES-Inspect is a tool for auditing the ORES edit quality model. While it's still in beta development, it is accessible at this Toolforge URL.

"Auditing" is just a fancy word for testing: determining if a system's outputs are expected. As a system, the ORES edit quality model tries to predict whether an individual edit is damaging: is an edit likely to be against consensus? As researchers, we're interested in auditing ORES because auditing helps wikipedians reflect "on current quality control processes and the role of algorithms in those processes".[2]

ORES-Inspect helps people audit by examining older edits where ORES' prediction of an edit's quality diverged from the community's response to that edit. The interface shows enwiki edits from 2019 that were either (a) predicted to be damaging by ORES but were not reverted or (b) predicted to be non-damaging by ORES but were reverted. Filter controls let ORES-Inspect users examine only edits that meet certain criteria, such as edits on a set of related pages or edits from newcomers. By finding individual cases where the ORES model made an incorrect prediction, users can assemble groups of similar misclassifications that indicate bugs to be fixed in future ORES models. Currently, ORES-Inspect is under active development.
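To make that selection logic concrete, here is a minimal Python sketch (not the actual ORES-Inspect backend) of how the two disagreement buckets and the filter controls could work over pre-fetched edit records. The field names (rev_id, damaging_prob, was_reverted, user_is_newcomer, page_title) and the 0.5 threshold are illustrative assumptions, not the tool's real schema or cutoff.

    # Minimal sketch, assuming pre-fetched edit records with hypothetical fields.
    # The 0.5 cutoff is illustrative only; ORES exposes its own thresholds.
    THRESHOLD = 0.5

    def disagreement_bucket(edit):
        """Return which disagreement bucket an edit falls into, or None if ORES
        and the community (as proxied by reverts) agree."""
        predicted_damaging = edit["damaging_prob"] >= THRESHOLD
        if predicted_damaging and not edit["was_reverted"]:
            return "predicted damaging, not reverted"
        if not predicted_damaging and edit["was_reverted"]:
            return "predicted non-damaging, reverted"
        return None

    def filter_edits(edits, newcomers_only=False, pages=None):
        """Apply ORES-Inspect-style filters before looking for disagreements."""
        for edit in edits:
            if newcomers_only and not edit["user_is_newcomer"]:
                continue
            if pages is not None and edit["page_title"] not in pages:
                continue
            yield edit

    edits_2019 = [
        {"rev_id": 1, "damaging_prob": 0.91, "was_reverted": False,
         "user_is_newcomer": True, "page_title": "Example A"},
        {"rev_id": 2, "damaging_prob": 0.07, "was_reverted": True,
         "user_is_newcomer": False, "page_title": "Example B"},
    ]

    for edit in filter_edits(edits_2019):
        bucket = disagreement_bucket(edit)
        if bucket:
            print(edit["rev_id"], bucket)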

If you have questions or thoughts, feel free to contact us via the talk page, on Suriname0's talk page, or via email.

Sign-up

If you want to be pinged about future software releases and questions related to ORES-Inspect, edit your name into the list below.

Background and Methods

The basic idea behind a "technology probe"[3] is to learn about a process by seeing how people respond to a new tool designed to make that process explicit. In this case, we are designing ORES-Inspect for Wikipedians both to learn about the predictions that ORES makes and to audit ORES' role in the Wikipedia ecosystem. We plan to run a usability study in which we ask Wikipedians to use ORES-Inspect while we interview them about the process of auditing ORES.

Auditing a machine learning system like ORES involves comparing that system's predictions to the expected outputs. For the ORES edit quality models, we expect a revision that diverges from consensus to be scored as damaging and to be reverted by the community, and we expect a revision in line with community consensus to receive a low score. But consensus is a tricky thing: not only does it change over time, but each Wikipedian also brings their own unique perspective. To audit ORES, individual editors need an understanding of both this consensus and their own perspective in order to determine where ORES diverges from that consensus. If ORES' predictions consistently diverge from consensus for some set of edits, that is a bug the ORES developers can address. ORES' predictions matter because they influence the likelihood of an edit being reverted.[4]
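As a rough illustration of that comparison, the sketch below tallies where ORES' predictions agree or diverge with reverts, treating a revert as a (noisy) proxy for the community judging an edit damaging. The records and the 0.5 threshold are invented for illustration; this is not how ORES or ORES-Inspect compute anything internally.

    # Sketch of the aggregate comparison an audit relies on, under the
    # assumption that reverts approximate community consensus.
    from collections import Counter

    def audit_counts(edits, threshold=0.5):
        counts = Counter()
        for edit in edits:
            predicted = edit["damaging_prob"] >= threshold
            expected = edit["was_reverted"]  # proxy for "community judged it damaging"
            if predicted and expected:
                counts["agree: damaging"] += 1
            elif not predicted and not expected:
                counts["agree: non-damaging"] += 1
            elif predicted:
                counts["diverge: flagged but kept"] += 1
            else:
                counts["diverge: missed but reverted"] += 1
        return counts

    edits = [
        {"damaging_prob": 0.91, "was_reverted": True},
        {"damaging_prob": 0.88, "was_reverted": False},
        {"damaging_prob": 0.05, "was_reverted": True},
    ]
    print(audit_counts(edits))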

My own experiences trying to audit ORES reveal some of the challenges with auditing as a process: knowing what the "expected outputs" are can be half the task. ORES-Inspect aims to make it easier to learn about consensus and revert processes on Wikipedia by allowing editors to specifically inspect revisions that are and are not reverted. (Many Wikipedians will know that whether a revision is reverted depends on many factors: the existing quality of the page, the number of watchers, the visibility of the damage, conflict with "owners", etc.) Even if the consensus is clear, the costs of ORES making a mistake will differ between editors. If I care primarily about "how ORES affects newcomer retention" vs "vandalism on stub-class articles" vs "edits on pages in some category", I may interpret ORES' mistakes as having very different costs. Historically, these differences have tended to be invisible; viewing ORES predictions for a specific set of relevant edits surfaces these differences and allows an auditor to focus on the issue they care about. I have a lot of thoughts about auditing and how it should influence the design of ORES-Inspect, so this paragraph is just a teaser. If you have thoughts, we want to talk to you! Make sure to sign up on the list above.
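To illustrate how the same mistakes can carry different costs for different auditors, here is a small sketch that weights the two kinds of ORES/community disagreement under two hypothetical cost profiles. The profiles, weights, and labels are invented for illustration only.

    # Hypothetical cost profiles: the same set of ORES mistakes can be weighted
    # very differently depending on what an auditor cares about.
    COST_PROFILES = {
        # A newcomer-retention auditor worries most about good-faith edits
        # being flagged as damaging even though the community kept them.
        "newcomer retention": {"flagged but kept": 5.0, "missed but reverted": 1.0},
        # A vandalism-patrol auditor worries most about damage ORES fails to flag.
        "vandalism patrol": {"flagged but kept": 1.0, "missed but reverted": 5.0},
    }

    def weighted_cost(mistakes, profile):
        """Total cost of a list of mistake labels under one auditor's cost profile."""
        return sum(COST_PROFILES[profile][m] for m in mistakes)

    mistakes = ["flagged but kept", "flagged but kept", "missed but reverted"]
    for profile in COST_PROFILES:
        print(profile, weighted_cost(mistakes, profile))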

Timeline

Timeline of existing and future activities.

Date | Activities | Deliverables
2021/01 | Initial project launch | Design wireframe and data needs analysis
2021/02 | Lu Li (Macalester) and Phyllis Gan (UMN) join the project as developers | --
2021/03 | Construct database from dumps | ToolsDB database with historical rev data, indexed (currently: 11GB without full-text indices). See T277609 for info about ORES predictions.
2021/05 | Receive IRB determination | Determined to be "Not Human Research"; see ethics section below.
2021/09 | Jada Lilleboe (UMN) and Lauren Hagen (UMN) join the project as developers | --
2022/04 | Launch initial interface on Toolforge | Check this URL for alpha release
2024/06 | Presented work at Wiki Workshop 2024 | Pre-print
Future | Interview tool users about auditing and ORES-Inspect usability | Qualitative interview notes

Policy, Ethics and Human Subjects Research

This study was reviewed by the University of Minnesota's IRB. The IRB determined that our proposed activity is not research involving human subjects as defined by DHHS and FDA regulations. Once the ORES-Inspect tool officially launches, we'll be collecting interview data and analyzing the usage data of ORES-Inspect. Have a concern? Please let us know.

See Also

Other conceptually related research:

References

  1. Levonian, Zachary; Hagen, Lauren; Li, Lu; Lilleboe, Jada; Wastvedt, Solvejg; Halfaker, Aaron; Terveen, Loren (2024-06-12). "ORES-Inspect: A technology probe for machine learning audits on enwiki". doi:10.48550/arXiv.2406.08453. Retrieved 2024-07-12.
  2. Halfaker, Aaron; Geiger, R. Stuart (2020-10-14). "ORES: Lowering Barriers with Participatory Machine Learning in Wikipedia". Proceedings of the ACM on Human-Computer Interaction 4 (CSCW2): 1–37. ISSN 2573-0142. doi:10.1145/3415219. 
  3. Hutchinson, Hilary; Hansen, Heiko; Roussel, Nicolas; Eiderbäck, Björn; Mackay, Wendy; Westerlund, Bo; Bederson, Benjamin B.; Druin, Allison; Plaisant, Catherine; Beaudouin-Lafon, Michel; Conversy, Stéphane (2003). "Technology probes: inspiring design for and with families". Proceedings of the conference on Human Factors in Computing Systems - CHI '03 (Ft. Lauderdale, Florida, USA: ACM Press): 17. ISBN 978-1-58113-630-2. doi:10.1145/642611.642616 – via ACM. 
  4. TeBlunthuis, Nathan; Hill, Benjamin Mako; Halfaker, Aaron (2020-06-04). "The effects of algorithmic flagging on fairness: quasi-experimental evidence from Wikipedia". arXiv:2006.03121 [cs].