
Machine learning models/Production/Wikidata goodfaith edit



Model card
This page is an on-wiki machine learning model card.
A model card is a document about a machine learning model that seeks to answer basic questions about the model.
Model Information Hub
Model creator(s): Aaron Halfaker (User:EpochFail) and Amir Sarabadani
Model owner(s): WMF Machine Learning Team (ml@wikimediafoundation.org)
Model interface: ORES homepage
Code: ORES GitHub, ORES training data, and ORES model binaries
Uses PII: No
In production?: Yes
Which projects?: Wikidata
This model uses data about a revision to predict the likelihood that the revision is in good faith.


Motivation


Not all damaging edits are vandalism. This model is intended to differentiate between edits that are intentionally harmful (badfaith/vandalism) and edits that are damaging but not intended to be harmful (goodfaith damage). The model predicts whether or not a given revision was made in good faith, and returns probabilities that serve as a measure of its confidence. This model was inspired by research on Wikipedia's quality control system and the potential for vandalism detection models to also be used as "goodfaith newcomer" detection systems.[1]

Users and uses

Use this model for
  • Prioritizing the review and potential reversion of vandalism on Wikidata.
  • Detecting goodfaith contributions by editors on Wikidata.
Don't use this model for
  • Serving as the ultimate arbiter of whether or not an edit was made in good faith.
  • Scoring edits outside of Wikidata.
Current uses
  • Wikidata uses the model as a service to facilitate efficient edit review and newcomer support.
  • On an individual basis, anyone can submit a properly-formatted API call to ORES for a given revision and get back the result of this model, as in the example below.
Example API call:
https://ores.wikimedia.org/v3/scores/wikidatawiki/1907686315/goodfaith
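
A minimal sketch of making this call from Python and reading the score out of the response (the requests library and the variable names here are illustrative, not part of ORES itself):

import requests

# Score a single Wikidata revision with the goodfaith model.
rev_id = 1907686315
url = f"https://ores.wikimedia.org/v3/scores/wikidatawiki/{rev_id}/goodfaith"

response = requests.get(url, timeout=30)
response.raise_for_status()
data = response.json()

# The score is nested under wiki -> scores -> revision -> model.
score = data["wikidatawiki"]["scores"][str(rev_id)]["goodfaith"]["score"]
print(score["prediction"])           # e.g. True
print(score["probability"]["true"])  # e.g. 0.9999997...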

Ethical considerations, caveats, and recommendations


The Wikidata community decided to use this model, and the model has been validated over time through community use.

This model is known to assign newer editors a lower probability of editing in good faith.

Internal or external changes that could make this model deprecated or no longer usable include:

  • Data drift, i.e. the model's training data no longer reflects current editing activity.
  • The model no longer meets desired performance metrics in production.
  • The Wikidata community decides to stop using the model.

Model


Performance


Test data confusion matrix:

Label   n       ~True   ~False
True    14035   13391   644
False   2100    229     1871

Test data sample rates:

Rate     True   False
sample   0.87   0.13

Test data performance:

Statistic     True    False
match_rate    0.844   0.156
filter_rate   0.156   0.844
recall        0.954   0.891
precision     0.983   0.744
f1            0.968   0.811
accuracy      0.946   0.946
fpr           0.109   0.046
roc_auc       0.977   0.973
pr_auc        0.995   0.802
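
These statistics follow directly from the confusion matrix above. A short plain-Python sketch recomputing the headline numbers (the variable names are ours):

# Confusion matrix cells from the table above.
tp, fn = 13391, 644   # actual True:  predicted ~True, ~False
fp, tn = 229, 1871    # actual False: predicted ~True, ~False

n = tp + fn + fp + tn                    # 16135 test revisions
recall_true = tp / (tp + fn)             # 0.954
recall_false = tn / (tn + fp)            # 0.891
precision_true = tp / (tp + fp)          # 0.983
precision_false = tn / (tn + fn)         # 0.744
f1_true = (2 * precision_true * recall_true
           / (precision_true + recall_true))  # 0.968
accuracy = (tp + tn) / n                 # 0.946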

Implementation

Model architecture
{
    "type": "GradientBoosting",
    "params": {
        "scale": true,
        "center": true,
        "labels": [
            true,
            false
        ],
        "multilabel": false,
        "population_rates": null,
        "ccp_alpha": 0.0,
        "criterion": "friedman_mse",
        "init": null,
        "learning_rate": 0.1,
        "loss": "deviance",
        "max_depth": 7,
        "max_features": "log2",
        "max_leaf_nodes": null,
        "min_impurity_decrease": 0.0,
        "min_samples_leaf": 1,
        "min_samples_split": 2,
        "min_weight_fraction_leaf": 0.0,
        "n_estimators": 700,
        "n_iter_no_change": null,
        "random_state": null,
        "subsample": 1.0,
        "tol": 0.0001,
        "validation_fraction": 0.1,
        "verbose": 0,
        "warm_start": false
    }
}
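
Most of these parameters map one-to-one onto scikit-learn's GradientBoostingClassifier; the remaining keys (scale, center, labels, multilabel, population_rates) belong to the revscoring wrapper around it. A sketch of the underlying estimator, assuming a scikit-learn version that still accepts loss="deviance" (renamed to "log_loss" in scikit-learn 1.1):

from sklearn.ensemble import GradientBoostingClassifier

# Gradient-boosted decision trees with the hyperparameters listed above.
clf = GradientBoostingClassifier(
    loss="deviance",          # logistic loss; "log_loss" in newer releases
    learning_rate=0.1,
    n_estimators=700,
    max_depth=7,
    max_features="log2",
    criterion="friedman_mse",
    min_samples_split=2,
    min_samples_leaf=1,
    min_weight_fraction_leaf=0.0,
    subsample=1.0,
    ccp_alpha=0.0,
    tol=0.0001,
    validation_fraction=0.1,
)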
Output schema
{
    "title": "Scikit learn-based classifier score with probability",
    "type": "object",
    "properties": {
        "prediction": {
            "description": "The most likely label predicted by the estimator",
            "type": "boolean"
        },
        "probability": {
            "description": "A mapping of probabilities onto each of the potential output labels",
            "type": "object",
            "properties": {
                "true": {
                    "type": "number"
                },
                "false": {
                    "type": "number"
                }
            }
        }
    }
}
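
A score object can be checked against this schema with the jsonschema package; a minimal sketch, using the score from the example output below:

import jsonschema

schema = {
    "type": "object",
    "properties": {
        "prediction": {"type": "boolean"},
        "probability": {
            "type": "object",
            "properties": {
                "true": {"type": "number"},
                "false": {"type": "number"},
            },
        },
    },
}

score = {
    "prediction": True,
    "probability": {"true": 0.9999997872663704,
                    "false": 2.1273362960094744e-07},
}
jsonschema.validate(instance=score, schema=schema)  # raises ValidationError on mismatch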
Example input and output
Input:
https://ores.wikimedia.org/v3/scores/wikidatawiki/1907686315/goodfaith

Output:

{
    "wikidatawiki": {
        "models": {
            "goodfaith": {
                "version": "0.5.0"
            }
        },
        "scores": {
            "1907686315": {
                "goodfaith": {
                    "score": {
                        "prediction": true,
                        "probability": {
                            "false": 2.1273362960094744e-07,
                            "true": 0.9999997872663704
                        }
                    }
                }
            }
        }
    }
}

Data

Data pipeline
Tabular data about edits is collected from the MediaWiki API, preprocessed (via log-transformations, joining with public editor data, etc.), and joined with user-generated goodfaith/damaging labels.
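
As an illustration of the log-transformation step, a sketch in Python; the feature names are hypothetical stand-ins, and the real features come from revscoring's feature extraction against the MediaWiki API:

import math

# Hypothetical raw features for one revision.
raw = {"bytes_changed": 2048, "editor_edit_count": 150}

# Log-transform heavy-tailed count features so the classifier sees a
# more compressed, better-behaved scale.
features = {name: math.log(value + 1) for name, value in raw.items()}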
Training data
This model was trained using hand-labeled training data that is several years old.
Test data
The statistics reported here were calculated by holding out a random partition of the training data from the training process. The model then makes predictions on that held-out data, which are compared to the underlying ground-truth labels.
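
A sketch of that holdout procedure with scikit-learn; the placeholder data and split fraction are illustrative, not the exact training script:

import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((16135, 10))    # placeholder feature matrix
y = rng.random(16135) < 0.87   # placeholder labels at the ~0.87 true rate

# Hold out a random partition for testing; stratify to preserve the
# observed true/false ratio.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)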

Licenses


Citation


Cite this model card as:

@misc{Triedman_Bazira_2023_wikidata_goodfaith,
  title={Wikidata goodfaith model card},
  author={Triedman, Harold and Bazira, Kevin},
  year={2023},
  url={https://meta.wikimedia.org/wiki/Machine_learning_models/Production/Wikidata_goodfaith_edit}
}
  1. Halfaker, A., Geiger, R. S., & Terveen, L. G. (2014, April). Snuggle: Designing for efficient socialization and ideological critique. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 311-320).