Template:Model card ORES damaging edit no stats

From Meta, a Wikimedia project coordination wiki
Model card
This page is an on-wiki machine learning model card.
[Image: A diagram of a neural network]
A model card is a document about a machine learning model that seeks to answer basic questions about the model.
Model Information Hub
Model creator(s): Aaron Halfaker (User:EpochFail) and Amir Sarabadani
Model owner(s): WMF Machine Learning Team (ml@wikimediafoundation.org)
Model interface: ORES homepage
Code: ORES GitHub, ORES training data, and ORES model binaries
Uses PII: No
In production?: Yes
Which projects?: English Wikipedia
This model uses data about a revision to predict the likelihood that the revision is damaging.


Motivation

Some goodfaith edits are damaging to an article, and not all damaging edits are made in bad faith. This model (together with a goodfaith model) is intended to differentiate between edits that are intentionally harmful (badfaith/vandalism) and edits that are damaging but not intended to be harmful (goodfaith damage).

This model helps prioritize the review of potentially damaging edits or vandalism. It predicts whether or not a given revision is damaging and returns probabilities that serve as a measure of its confidence.

Users and uses

Use this model for
  • This model should be used for prioritizing the review and potential reversion of vandalism on {{{language}}} {{{project}}}.
  • This model should be used for detecting damaging contributions by editors on {{{language}}} {{{project}}}.
Don't use this model for
  • This model should not be used as an ultimate arbiter of whether or not an edit ought to be considered damaging.
  • The model should not be used outside of {{{language}}} {{{project}}}.
Current uses
  • {{{language}}} {{{project}}} uses the model as a service for facilitating efficient vandalism triage, edit reviews, or newcomer support.
  • On an individual basis, anyone can submit a properly-formatted API call to ORES for a given revision and get back the result of this model.
Example API call:
{{{model_input}}}
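As a minimal sketch of such an API call, the snippet below builds a request URL against the public ORES v3 scores endpoint and pulls the damaging score out of a response. The revision ID and the response values here are illustrative, not a real score:

```python
from urllib.parse import urlencode

ORES_BASE = "https://ores.wikimedia.org/v3/scores"  # public ORES endpoint


def build_damaging_query(context: str, rev_id: int) -> str:
    """Build an ORES v3 scores URL for the 'damaging' model.

    `context` is a wiki database name such as "enwiki".
    """
    query = urlencode({"models": "damaging", "revids": rev_id})
    return f"{ORES_BASE}/{context}/?{query}"


def extract_damaging_score(response: dict, context: str, rev_id: int) -> dict:
    """Pull the damaging score block out of an ORES v3 response.

    The nesting (context -> scores -> revid -> model -> score) mirrors
    the ORES v3 response shape.
    """
    return response[context]["scores"][str(rev_id)]["damaging"]["score"]


# Illustrative response for a hypothetical revision (values made up):
sample_response = {
    "enwiki": {
        "scores": {
            "123456": {
                "damaging": {
                    "score": {
                        "prediction": False,
                        "probability": {"false": 0.97, "true": 0.03},
                    }
                }
            }
        }
    }
}

url = build_damaging_query("enwiki", 123456)
score = extract_damaging_score(sample_response, "enwiki", 123456)
```

Fetching the URL with any HTTP client returns a JSON document of this shape; the prediction and the two probabilities are the model's output for that revision.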

Ethical considerations, caveats, and recommendations


{{{language}}} {{{project}}} decided to use this model. Over time, the model has been validated through use in the community.

This model is known to assign edits made by newer editors a higher probability of being damaging.

Internal or external changes that could deprecate this model or make it unusable include:

  • Data drift: the training data no longer reflects current editing behavior.
  • The model no longer meets desired performance metrics in production.
  • The {{{language}}} {{{project}}} community decides to stop using this model.

Model


Implementation

Model architecture
{{{model_architecture}}}
Output schema
{{{model_output_schema}}}
Example input and output
Input:
{{{model_input}}}

Output:

{{{model_output}}}

Data

Data pipeline
Tabular data about edits is collected from the MediaWiki API, preprocessed (via log-transformations, joining with public editor data, etc.), and joined with user-generated goodfaith/damaging labels.
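The log-transformation step mentioned above can be sketched as follows. The feature names are illustrative placeholders, not the actual ORES feature set:

```python
import math


def preprocess_edit_features(raw: dict) -> dict:
    """Log-transform heavy-tailed count features, as in the pipeline
    description above. Feature names are illustrative only."""
    transformed = dict(raw)
    for name in ("chars_added", "chars_removed", "editor_edit_count"):
        # log1p keeps zero counts at zero and compresses large values
        transformed[f"log_{name}"] = math.log1p(raw[name])
    return transformed


features = preprocess_edit_features(
    {"chars_added": 150, "chars_removed": 0, "editor_edit_count": 12}
)
```

Compressing raw counts this way keeps a single very large edit from dominating distance- or split-based learning on the tabular features.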
Training data
This model was trained using hand-labeled training data that is several years old.
Test data
The statistics reported here were calculated by selecting a random partition of the training data to hold out from the training process. The model then makes a prediction on that data, which is compared to the underlying ground truth.
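The holdout procedure described above can be sketched as a random partition of the labeled data. The 20% fraction and fixed seed here are illustrative assumptions, not the actual evaluation configuration:

```python
import random


def holdout_split(labeled_revisions: list, holdout_fraction: float = 0.2, seed: int = 0):
    """Hold out a random partition of labeled data for evaluation.

    Returns (train, test); fraction and seed are illustrative.
    """
    rng = random.Random(seed)
    shuffled = labeled_revisions[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - holdout_fraction))
    return shuffled[:cut], shuffled[cut:]


train, test = holdout_split(list(range(100)))
```

The model is fit only on the train partition; its predictions on the test partition are then compared to the hand-labeled ground truth to produce the reported statistics.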

Licenses


Citation


Cite this model card as:

@misc{
  Triedman_Bazira_2023_{{{language}}}_{{{project}}}_damaging,
  title={ {{{language}}} {{{project}}} damaging model card },
  author={ Triedman, Harold and Bazira, Kevin },
  year={ 2023 },
  url={ https://meta.wikimedia.org/wiki/Model_card_ORES_damaging_edit_no_stats }
}