
Community Wishlist Survey 2023/Anti-harassment/Allow abuse filters to be hidden to only oversighters


Allow abuse filters to be hidden to only oversighters

  • Problem: Often, the best way to prevent mass-doxxing of users is with an abuse filter. However, filtering doxxing attempts will usually involve including private information in the filter rules. The abuse log will also contain the private information that the abuse filter is preventing from disclosure, which a human oversighter will have to suppress manually. The existing private filter status is insufficient because it still allows administrators and other editors to view personal information which should be restricted.
  • Proposed solution: Abuse filters should have an option to automatically suppress the abuse log for the filter. Abuse filters should also have a separate option to restrict the ability to view and edit the filter to oversighters when the filter rule contains private data. (A configuration sketch follows this list.)
  • Who would benefit: Oversighters, who otherwise have to manually suppress filter hits; doxxing victims, who otherwise have their personal information disclosed to a larger group of people.
  • More comments:
  • Phabricator tickets: phab:T290324
  • Proposer: AntiCompositeNumber (talk) 18:56, 23 January 2023 (UTC)
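
A rough sketch, in LocalSettings.php terms, of what the second option might look like, purely to make the proposal concrete. The right names below are invented here for illustration; they are not existing AbuseFilter rights, and the actual implementation (see the Phabricator ticket) may differ:

    // Hypothetical sketch only: a new per-filter protection level whose
    // view/edit rights are granted solely to the oversight ("suppress") group.
    // Neither right name exists in the AbuseFilter extension today.
    $wgGroupPermissions['suppress']['abusefilter-view-oversighted']   = true;
    $wgGroupPermissions['suppress']['abusefilter-modify-oversighted'] = true;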

Discussion

  • How often does this issue actually arise? I've been an admin on en-wiki (which is the project that typically has the greatest problem with long-term stalking/doxxing) for fifteen years, and in the past have been a checkuser, oversighter, and arbitrator there, and could probably count the number of occasions I'd have found this feature useful on one hand. In most cases, administrators being able to view the edit filters is a feature, not a bug; edit filters have a bad habit of triggering false positives, and restricting the ability to view them to oversighters would on some projects mean literally only a couple of people—who likely won't be experienced with regex bug-testing—have the ability to address any bugs in the filters. Iridescent (talk) 07:16, 24 January 2023 (UTC)
    I can think of at least three times in the past 30 days this would have been useful, including one that is going to mean there will be a noticeable jump in the number of suppressions on enwiki during January. I would hope any OS who isn't comfortable with regex and abuse filter testing would seek out help - either from a fellow OS or from a steward - before attempting to use it on their wiki. Best, Barkeep49 (talk) 13:25, 24 January 2023 (UTC)
    Hi. I'm the author of the phab ticket. I'm a sysop, oversighter and abuse filter editor on fr-wp; we have encountered several cases of doxxing (with threats of violence) by a small number of LTAs, who have nonetheless been very active. This required the use of abuse filters, and using real names of Wikipedians in an abuse filter, even if not linked to their usernames, is not comfortable at all. On fr-wp, 5 of the 6 OS have abuse filter rights, and I'm one of the main users of abuse filters. Having the possibility to use an OS-only abuse filter does not mean that it would have to be used on every wiki. Best, — Jules* talk 16:00, 29 January 2023 (UTC)
  • Just noting that as a volunteer I did a tiny bit of work on this feature, the (work-in-progress/nowhere near complete) results of which I've just committed to a branch. — TheresNoTime (talk • they/them) 20:29, 24 January 2023 (UTC)
  • I can get behind something like this, but what would the abuse filter regex look like? Would it just be a small database of previously posted addresses and the like, or would it be a filter to catch the pattern of an address? TheManInTheBlackHat (talk) 18:22, 28 January 2023 (UTC)
    @TheManInTheBlackHat: on fr-wp, we have an abuse filter against doxxing, mostly against the disclosure of editors' real names. The goal of the regex is to catch those real names (and variations). Best, — Jules* talk 16:02, 29 January 2023 (UTC)
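    For illustration only, a minimal sketch of that kind of rule in the AbuseFilter rule language; the name and the leetspeak variant below are invented placeholders, not entries from any real filter:
        /* Match additions containing a (fictional) editor's real name and
           simple obfuscated variations; irlike is a case-insensitive regex match. */
        added_lines irlike "j[\s._-]*doe|j0hn[\s._-]*d0e"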
  • Agreed that this is a needed feature; I hope that it can have enough community support to be selected as one of this year's wishes. Thanks, —MarcoAurelio (talk) 10:50, 6 February 2023 (UTC)
  • I understand the motivation behind this proposal, but such a feature would seem highly problematic and ripe for abuse itself, as it amounts to the ability of a very small group of users to block arbitrary content without scrutiny even from admins. (The proposal doesn't mention any intention to technically limit this ability to actual doxing content like phone numbers or names, nor does it seem feasible to do so.) I see no reason to doubt the good intentions of the proposer and other oversighters who are advocating here for giving them this extreme power, i.e. to assume that they intend to use it for anything beyond the stated purpose. However, once such a feature is deployed, we cannot assume that in the years and decades to come, all users with that right across all wikis will consistently resist that temptation. For example, recall the Croatian Wikipedia situation, where an entire Wikipedia (including ArbCom etc.) was dominated for many years by a small group of power users who imposed their nationalist viewpoint via admin tools (whose use is publicly logged). phab:T290324 would have been the perfect tool for this group (say, to thwart the mention of war crime convictions in certain BLPs, one of the cases examined in the linked report, perhaps with a pseudo-justification in the filter description referring to the "Removal of potentially libelous information" clause of the oversight policy). Access to such an oversighter feature might well have enabled this group to evade for much longer the public scrutiny that eventually brought them down. Besides, even in the current setup, where oversight actions are still retroactive and to a considerable extent open to review by non-oversighters, it is not unheard of that some (a minority of) oversighters occasionally overstep their remit and apply interpretations of policy that at the very least stretch the local or global community's consensus. That's another reason not to enable oversighters to operate completely without scrutiny even by admins. Regards, HaeB (talk) 15:22, 6 February 2023 (UTC)
    @HaeB if oversighters abuse such a filter, people can turn to the Ombuds commission, and these oversighters will lose their rights much faster than the Croatian admins you mentioned did.
    Not sure if you're aware of this, but like many other projects, dewiki uses a filter against potential doxxing too. Regular admins (many of whom did not sign the WMF non-disclosure agreement) should not be able to see the content and log of this filter, which they can access right now. Johannnes89 (talk) 16:07, 6 February 2023 (UTC)
    That's entirely beside the point. "People can turn to" the Ombuds commission or other channels only if they are able to notice and document such policy violations (not to speak of the policy knowledge, skill and energy that may be required for filing a successful complaint). And this proposal would drastically reduce the number of people with that ability, by removing all admins (who otherwise could - and often do, as Iridescent said and I'm sure you are aware anyway - notice and address issues with abuse filters, like their frequent false positives).
    In the hypothetical Croatian "libel" filter example above, the group of people encountering it may only consist of those unlucky editors who try to edit a particular BLP to add the war crime conviction and get their edit or even account blocked with a (to them) rather cryptic message. We know that inexperienced editors, including subject matter experts, often find it difficult enough already to understand why their contributions are rejected by a publicly logged revert or deletion, and are rarely able to mount the actions and policy arguments necessary to overcome a mistaken or abusive reaction even when it is open for public scrutiny. So I really don't know where your confidence comes from that such an abusive abuse filter (that blocks edits that are not in fact libelous/oversightable, but can only be inspected by a very small group of fellow oversighters and stewards who may not even speak the wiki's language) would lead to its author "los[ing] their rights much faster than the Croatian admins you mentioned".
Regards, HaeB (talk) 18:29, 6 February 2023 (UTC)
The level of abuse you are imagining is already possible with private (=admin-only) filters. If oversighters abuse the newly created filter, at least there is an official body (the Ombuds commission) to turn to, which is much easier than following RfC procedures and getting help from the global community in case of admin abuse.
I don't see much more risk of abuse than is already present with private filters – but I do see a problem with OS-filter content currently being visible to admins. Johannnes89 (talk) 19:54, 6 February 2023 (UTC)
"The level of abuse you are imagining is already possible with private (=admin-only) filters" - I'm sorry, but I'm not sure you actually read my entire comment, which is not about the level of seriousness of the abuse itself but about the size of the group of people who would be able to notice and scrutinize it. (And to your earlier remark about dewiki: yes, as a longtime dewiki admin myself - and former checkuser who e.g. wrote large parts of the project's local CU policy that are still in place today - I'm aware that dewiki uses them. I imagine you are referring to de:Special:abusefilter/267 in particular, which you and others are maintaining regarding doxxing. Besides blocking the kind of information that the proposal talks about, this filter honestly also already contains some questionable entries - the title of an entire book that has been the subject of legitimate community discussion and is currently still listed in a mainspace article? A good illustration of how such things might veer off course.)
Regards, HaeB (talk) 00:29, 7 February 2023 (UTC)
The logs of OS-level abuse filters should be accessible like any other AF log. (But, obviously, log entries containing private data would be suppressed, as is already done currently.)
@HaeB, I don't get what the difference would be from the current use of the oversight tools: oversighters could already abuse their tools to revert and suppress content without anyone outside OS, the stewards and the Ombuds commission (OC) being able to check whether there is an abuse. Plus, only big wikis (20 of them) have oversighters, so only those would have OS-level abuse filters.
However, regarding your concern (even if I don't share it), there could be a list on Meta-wiki of all OS-level filters across all wikis (dozens of them per wiki are not expected), so the OC could give them special scrutiny.
Best, — Jules* talk 19:54, 6 February 2023 (UTC)
"Oversighters could already abuse their tools to revert and suppress content" - There's a big difference between oversighting edits (and log entries) retroactively and preventing them from being made in the first place. That's, after all, a main rationale for using abuse filters. "The logs of OS-level abuse filters should be accessible like any other AF log" - not according to the proposal, which asks that "Abuse filters should have an option to automatically suppress the abuse log for the filter" (without limiting the suppression to only the sensitive parts of the log, say).
Like Johannnes89 above, you are basically contradicting yourself here, arguing on the one hand that the proposed change would not meaningfully decrease transparency when it comes to its benefits (the ability of admins to scrutinize filters and address problems with them), but that it is necessary to decrease transparency when it comes to its downsides (increased exposure of some sensitive information to admins). Regards, HaeB (talk) 00:29, 7 February 2023 (UTC)
  • Yes: I disagree with the proposal regarding the auto-suppression of logs; I'm against that part.
  • No: I do not argue that the proposed change would not decrease transparency (yes, it would for AF editors; that is the point of it); I argue that it would not be less transparent than any suppression action.
Best, — Jules* talk 00:45, 7 February 2023 (UTC)
  • Comment: Looking at the above concerns raised by HaeB, might this benefit from establishing community consensus for such a feature existing prior to any (potential) work being done on it? The alternative, should this proposal get selected for work, is the implementation of a feature which then sits unused while discussions take place and policies get built — TheresNoTime-WMF (talk • they/them) 19:27, 6 February 2023 (UTC)
  • Comment: @LD: there are dozens of AF editors on several wikis (144+25 on en-wp!), and any sysop can ask to become an AF editor. The whole point of OS is to keep some private data... private, by limiting the number of people who can access it. It would make more sense to check that enough OS are AF editors, and to recruit some if needed. Best, — Jules* talk 23:04, 10 February 2023 (UTC)
    @Jules* I mean: the AF config has abusefilter-hide-log (the right to hide logs), which can remain an OS right, and it also has abusefilter-hidden-log (the right to see hidden logs), usually given to OS. Nevertheless, wikis can't really make AF editors able to have the "abusefilter-hidden-log" right, even though it doesn't give access to suppressed logs and revisions, since there's no policy (like the CU policy, OS policy, etc.) requiring AF editors to sign the Access to Non-Public Personal Data Policy and Confidentiality Agreement. I'm not saying any AF editor should be able to see hidden AF logs; I'm saying any wiki should be able to make that possible if it fits its scope. LD (talk) 23:23, 10 February 2023 (UTC)
    I don't understand. Why do you want to give abusefilter-hidden-log to non-OS, as it would mean that non-OS could access suppressed content? (If you mean that there should be a sysop-level hiding right for logs, in addition to the ability for OS to suppress content, I agree, but this is not the subject of this proposal.) — Jules* talk 23:34, 10 February 2023 (UTC)
    From the proposal: "The existing private filter status is insufficient because it still allows administrators and other editors to view personal information which should be restricted."
    That's not true; it depends on the AF config: by granting abusefilter-log-private to OS only, you allow only OS to see private logs, no one else. You could also grant it to both AF editors and OS. (There is no point in that at the moment, since there's no policy covering AF users.) So it depends on wiki and Meta scopes. A minimal configuration sketch follows.
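
    A minimal LocalSettings.php sketch of the configuration described above, assuming the standard AbuseFilter rights and Wikimedia's "suppress" (oversight) group; exact group names vary per wiki:

        // Take private-filter visibility away from ordinary admins...
        $wgGroupPermissions['sysop']['abusefilter-view-private'] = false;
        $wgGroupPermissions['sysop']['abusefilter-log-private']  = false;
        // ...and grant it, plus access to hidden abuse log entries,
        // to the oversight group only.
        $wgGroupPermissions['suppress']['abusefilter-view-private'] = true;
        $wgGroupPermissions['suppress']['abusefilter-log-private']  = true;
        $wgGroupPermissions['suppress']['abusefilter-hidden-log']   = true;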

    AF users will be in contact with private details no matter what. For instance, the AF extension won't erase IP addresses added to filters after at most 90 days (the way CheckUser data expires); LTA-based filters are effectively used for retention, in order to keep identifying a person. Even non-public details can be added to filters after getting an email from a CU. That's why my thinking is linked to this proposal: policy about privacy is the main subject.

    By contrast, this is not a concern for #Allow checkusers to use user-agent variables in Abusefilters & #Allow checkusers to use XFF variable in Abusefilter: devs could create an encrypted export of private data retrieved from the CU extension, which you would then import into a filter. But you can't encrypt unexpected private details from any wiki user. Of course, suppression is needed, but there are benefits to letting AF users check why suppressed log entries matched filters in the first place.

    From the proposal: "The abuse log will also contain the private information that the abuse filter is preventing from disclosure."
    I can't disagree with you: OS users keep data confidential. Why do they do so? Because they have signed an agreement not to disclose it.
    AF editors have not signed anything of the sort. We "hope" they do not disclose. LD (talk) 00:55, 11 February 2023 (UTC)
    No, I think you don't get it, @LD ;-). "The existing private filter status is insufficient because it still allows administrators and other editors to view personal information which should be restricted." refers to the fact that (anti-doxxing) abuse filters (not logs) containing private data are accessible to all AF editors; they should be accessible only to oversighters.
    The proposal has two parts:
    • create abuse filters visible only to OS, in order to correct the current situation described above;
    • allow auto-suppression of the logs of those newly created OS-level abuse filters (this only means that actions currently done manually by OS would be done automatically for some filters).
    — Jules* talk 10:30, 11 February 2023 (UTC)

Voting