
Community Wishlist Survey 2023/Anti-harassment

From Meta, a Wikimedia project coordination wiki
Anti-harassment
9 proposals, 250 contributors, 392 support votes
The survey has closed. Thanks for your participation :)



Allow checkusers to use XFF variable in Abusefilters

Discussion

Voting

Notifications for user page edits

  • Problem: If your user page is modified by a malicious user, you may not notice it if your watchlist is overflowing. Unlike with a normal article, it is very unlikely that another user will revert the malicious changes or warn you about them. And unlike article vandalism, user page vandalism can affect the unwitting user's standing in the community.
  • Proposed solution: Generate notifications for user page modifications by other users, just as user talk page notifications work now. Another solution would be to protect user pages from modification, but some edits may be friendly and even useful.
  • Who would benefit: Users whose user page has been vandalized.
  • More comments: It could be made configurable per user. This is a re-submission of Community Wishlist Survey 2022/Anti-harassment/Notifications for user page edits by Error.
  • Phabricator tickets: phab:T3876
  • Proposer: MarioGom (talk) 16:54, 28 January 2023 (UTC)[reply]

Discussion

Voting

Allow checkusers to use user-agent variables in Abusefilters

  • Problem: With AbuseFilter, there are several ways to target long-term abusers and harassers, including IP addresses, but we lack one that could be powerful: the user agent (UA).
  • Proposed solution: Add a user-agent variable to AbuseFilter. This would first require that CheckUser-level abuse filters be created.
  • Who would benefit: Everyone, especially victims of harassment; sysops, patrollers and anyone who fights long-term abuse (harassment-related or not); checkusers who are abuse filter editors.
  • More comments: The only downside that I see is that it would require some CheckUsers to be familiar with AbuseFilter and regexes.
  • Phabricator tickets: phab:T234155 and phab:T50623
  • Proposer: — Jules* talk 20:06, 2 February 2023 (UTC)[reply]

Discussion

  • The design of user-agent strings is well known. A string such as "Mozilla/5.0 (Linux; Android 6.0; HTC One M9 Build/MRA58K) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/52.0.2743.98 Mobile Safari/537.3" (an HTC One M9 mobile) could be handled by a few variables, depending on how precise we need to be: kernel from a database [equiset] and inherited from the browser, browser = "Chrome 52.0.2743.98" [browser_version?], device = "HTC One M9", system = "Android 6.0". Even if the variables are regex-based, a Toolforge tool could easily parse a UA string and convert it into a regex (mainly . into \. and [0-9] into \d); see the sketch after this discussion. LD (talk) 20:59, 2 February 2023 (UTC)[reply]
  • phab:T242825 should probably be resolved first. The current status from Chrome suggests time is running out, and Firefox supports the change, so they will likely follow suit. I'm thinking, if anything, someone should re-propose Community Wishlist Survey 2022/Anti-harassment/Deal with Google Chrome User-Agent deprecation. Would you be interested in that, Jules*? I think it's going to be worked on regardless, but the proposal will help ensure it gets the urgency it needs. MusikAnimal (WMF) (talk) 02:42, 6 February 2023 (UTC)[reply]
    Maybe @LD or Hyméros could re-propose that, as they know the subject better than I do? — Jules* talk 16:53, 6 February 2023 (UTC)[reply]
    Re-proposing a known wish seems pointless (or at least not recommended). Whether the design follows RFC 1945 or RFC 8942 doesn't matter much on the front end; it's all about matching the properties retrieved in AbuseFilter. At some point, Sec-CH-UA might be used instead of the UA, but while waiting for that migration, the request seems justified to me. On top of that, Sec-CH-UA does not seem to be incompatible with the purposes of this request [1][2]. LD (talk) 17:44, 6 February 2023 (UTC)[reply]
    Well, as I said, I assume the user agent deprecation will be worked on anyway. And you're right; whether it's client hints or UAs, we can use whichever one is available and expose it in CheckUser.
    We just approved Allow checkusers to use XFF variable in Abusefilters knowing it involves phab:T234155, which is the more significant amount of work. So I'm going to approve this one, too. According to a blog post from Google, UAs as we know them will largely be gone in just a matter of months, and thus probably not that useful in AbuseFilter. But there are other browsers than just Chrome, and automation can still supply a custom UA. So I think there's something to do here regardless of what happens.
    I'll approve this now. Thanks for participating in the survey! MusikAnimal (WMF) (talk) 21:28, 7 February 2023 (UTC)[reply]
    Hello everybody. I just want to emphasize that in the few cases where we need to track the "agents", it always involves the use of mobile phones, sometimes with the added use of WikiApp. In this case, the phone model is the final discriminating element. The ability to select or sort based on one of the agents would really save us hours. Hyméros --}-≽ Yes ? 17:47, 8 February 2023 (UTC)[reply]
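A minimal sketch of the string-to-regex conversion LD describes above, written in plain Python rather than as an actual Toolforge tool. The ua_to_regex helper and the choice to generalise digit runs to \d+ are illustrative assumptions, not an existing utility or the eventual AbuseFilter behaviour.

```python
import re

def ua_to_regex(ua: str) -> str:
    """Turn a literal user-agent string into a version-tolerant regex."""
    escaped = re.escape(ua)                   # "." -> "\.", "(" -> "\(", etc.
    return re.sub(r"\d+", r"\\d+", escaped)   # "52.0.2743.98" -> "\d+\.\d+\.\d+\.\d+"

ua = ("Mozilla/5.0 (Linux; Android 6.0; HTC One M9 Build/MRA58K) "
      "AppleWebKit/537.36 (KHTML, like Gecko) Chrome/52.0.2743.98 Mobile Safari/537.3")
pattern = ua_to_regex(ua)
# The pattern still matches after a minor version bump of the same browser/device.
print(bool(re.fullmatch(pattern, ua.replace("52.0.2743.98", "53.0.2785.101"))))  # True
```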

Voting

Minimize Wikimedia/Wikipedia's risk by enforcing 2FA on 'Mandatory Use User' groups

  • Problem: Even though we know it's extremely important for administrators and editors with advanced permissions to keep their accounts secure, not everyone in the mandatory-use user groups (and among SSH-key Wikitech users) has enabled 2FA on their account. If any of these accounts were compromised, it could cause widespread disruption and vandalism across Wikimedia/Wikipedia.
  • Proposed solution:  
  1. Implement T242031, to minimize, as much as possible, situations where people get locked out of their accounts.
  2. Give them a private message and a month to familiarize themselves with 2FA.
  3. Then add their groups to $wgOATHRequiredForGroups, preventing them from using their rights until they enable 2FA.
If we implement it smartly, the Foundation won't need any paid staff to act as support representatives.
  • Who would benefit: It will minimize Wikimedia/Wikipedia's risk of being compromised.
  • More comments: This way, we can get one step closer to making this possible for all concerned editors. The security team and community tech team should work together on this community wish.
  • Phabricator tickets: T150898, T242031
  • Proposer: MASUM THE GREAT (talk) 23:19, 30 January 2023 (UTC)[reply]

Discussion

  • Just a demo notification. Getting the credentials for these accounts is improbable but not impossible.
  • If any ill-intentioned expert hacker can get access to any of these accounts for 10 minutes, just imagine how much damage could be done to Wikimedia's sister projects!
  • By not enabling 2FA (or improving security) on these accounts, we are effectively challenging hostile hackers to use brute-force cracking or other methods.

This was a wish on a previous wishlist survey (2019), proposed by MASUM THE GREAT, and ranked #10.--MASUM THE GREAT (talk) 15:03, 1 February 2023 (UTC)[reply]

  • This probably should be in the Anti-harassment section, not Multimedia and Commons? And the more relevant task is T150898 I think. --Tgr (talk) 02:15, 1 February 2023 (UTC)[reply]
    Someone, please do that. Many thanks. -- MASUM THE GREAT (talk) 08:41, 1 February 2023 (UTC)[reply]
    @Tgr and Ahm masum: Moved, and the other Phabricator task added. Thanks! SWilson (WMF) (talk) 12:29, 1 February 2023 (UTC)[reply]
    Disagree that this should be in anti-harassment, unless every single security issue is also in anti-harassment. There's no harassment element in people failing to use 2FA. This is targeted at users who are already *supposed* to have 2FA in place; the overwhelming majority of them keep 2FA in place once they have it, so there's no reason that a hypothetical hacker would go after specific accounts. Risker (talk) 03:29, 21 February 2023 (UTC)[reply]
    @Risker: We've already established a consensus, which is why they're called "Mandatory Use User" groups. We don't need to reach the same consensus again. So can you tell me why only a 'majority', not 'all', of the required account holders? Can you, or any advanced permission holder, guarantee us that the current non-enabling state is a 0% security loophole? Aren't these non-2FA advanced permission holders a threat to our platform with each passing day?
    Yes, I agree. To make an effective long-term mass implementation, we need to rethink/redesign our current 2FA method. We have to bring it up to industry standard and automate it as much as possible. We also have to keep in mind that, as a nonprofit charitable organization, we have limited resources; we can't afford to hire too many paid support representatives. -- ~ MASUM THE GREAT (talk) 13:08, 21 February 2023 (UTC)[reply]
    Large-scale websites like this always attract ill-intentioned people who want to do harm. They don't need steward credentials: getting access to any wiki's homepage or database for 20 minutes through any one of the advanced account holders would be enough for them to tarnish Wikipedia/Wikimedia's reputation. We've already seen how we've gotten negative news coverage over silly little mistakes or vandalism.
    Yes, we will wait for a redesigned 2FA. But in the meantime, leaving a 'security loophole' in our platform isn't a wise decision. Is it, @Risker? -- ~ MASUM THE GREAT (talk) 13:57, 21 February 2023 (UTC)[reply]
  • This proposal must not be implemented without quite a few improvements to the 2FA process as it stands, in terms of set-up, use, support, and how it is handled globally, amongst others. Nosebagbear (talk) 13:28, 1 February 2023 (UTC)[reply]
    The people who would be affected by this proposal are already required by Foundation policy to have 2FA enabled. This would make it a technical requirement, rather than a social one. Yes, those issues need to be addressed, but this would not make the current situation any worse. HouseBlaster (talk) 21:40, 2 February 2023 (UTC)[reply]
  • Stewards/WMF Staff could have a routine audit process on this today - it would likely catch most deviations (see the sketch after this discussion). — xaosflux Talk 15:04, 12 February 2023 (UTC)[reply]
    @Xaosflux On non-crat wikis, in theory, yes. On other wikis, we can't remove permissions, so it'd be an informative campaign. Martin Urbanec (talk) 15:38, 15 February 2023 (UTC)[reply]
  • I question the problem statement that initiates this request. Administrators and editors are not amongst those who have mandatory 2FA requirements, most of those who have that requirement were verified to have 2FA enabled at the time of their accession to the positions that have mandatory 2FA. There is a limited number of individuals involved, and it should be an easy activity to ensure that they maintain 2FA through periodic scripted verification that has nothing to do with anything else in this proposal. It should be noted that the limitations of the current 2FA software are very well known, and have been for years; it was never designed or intended for broad community use, but instead was designed for use by those who have very close contact with the few individuals who can reset 2FA if the user has a problem (i.e., highest level developers, WMF staff, stewards, and a few others with a long history within the community). If the desire is to improve usage of 2FA amongst those outside of this very limited group, then the software needs a major redesign as well as dedicated ongoing multilingual support by paid employees, not just a minor tweak. There have been extremely few account hijackings over the last 20 years, and to my knowledge they have all been related to poor password hygiene on the part of the account holder. It would be more cost-effective, and considerably less work, to require a password change as a condition of granting advanced permissions. Note that I fully support the proper redesign of 2FA, but right now the current 2FA is massively below the industry standard and I do not think we should be further promoting it until it is brought up to something at least close to industry standard. Risker (talk) 03:56, 21 February 2023 (UTC)[reply]
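A rough sketch of what such a scripted audit might look like, assuming the OATHAuth extension's meta=oath API query module with an oathuser parameter (access to which is restricted, e.g. via the oathauth-api-all right). The exact module, parameter and response field names are assumptions to be checked against Extension:OATHAuth, and the audit list and session handling are purely illustrative.

```python
import requests

API = "https://meta.wikimedia.org/w/api.php"
session = requests.Session()  # would need to be logged in as a user holding the required right

def has_2fa(username: str):
    """Query whether a user has 2FA enabled (assumed meta=oath module; details may differ)."""
    data = session.get(API, params={
        "action": "query",
        "meta": "oath",
        "oathuser": username,
        "format": "json",
        "formatversion": "2",
    }, timeout=10).json()
    return data.get("query", {}).get("oath", {}).get("enabled")  # None if unavailable

# Hypothetical audit loop over members of a mandatory-2FA group.
for user in ["ExampleSteward", "ExampleInterfaceAdmin"]:
    if has_2fa(user) is False:
        print(f"{user} does not appear to have 2FA enabled")
```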

Voting

Reduce Conflict - Change Revert workflow

  • Problem: Reverts can cause conflict and harassment, but this may be due to system issues as well as editor behaviour. Causes of revert-related conflict are that discussions are not happening, or are happening in revert edit summaries rather than on talk pages, or on user talk pages rather than on the article's talk page. Attacks on user pages have been linked to the loss of experienced editors and to harassment.

    Currently, the revert process is:

    1. An edit is made by the soon-to-be revertee.
    2. The watchlist is triggered, and an editor reviews the edit with the help of ORES.
    3. The reverter reverts, and the revert triggers a notification to the revertee.
    4. The revertee may be upset at the speed of the revert (perceived injustice may cause conflict).
    5. The revertee clicks on the notification and is taken to the diff.
    6. The revertee is presented with the difference (diff) screen.
Issues with this are:
  • There is no clear call to action, and the screen is powerful but complicated (uncertainty is a cause of conflict).
  • The diff screen has lots of coloured options/links, but most will cause conflict if used for the wrong reasons ([restore this version], [edit], [Undo], etc.).
  • The diff UX accidentally personalises the revert: the revertee's name (left) vs the reverter's name (right), and the last coloured line (so the most likely to be clicked if confused) links to the reverter's user page and user talk.
As an aside, code repositories sometimes use mine/theirs to remove bias.
  • Proposed solution: A new preference (set to Yes for new editors) gives two choices
    1. No - Same as now
    2. Yes - the revertee is given clear choices:
      1. Add a revert topic to the article talk page (or maybe project talk)
      2. The revert summary and a link to the diff are copied to the article talk page, with a reply started for the revertee to fill in; the reverter receives a notification upon save
    The next step in the dispute resolution is outlined.
    • View the diff
    • Learn more about reverts
  • Who would benefit: Editors and the community, as it encourages dialogue; see Wikipedia:Revert_only_when_necessary#Explain_reverts and Wikipedia:Dispute_resolution
  • Risks: Editors perform deletions rather than reverts.
  • More comments:
  • Phabricator tickets:
  • Proposer: Wakelamp (talk) 12:26, 6 February 2023 (UTC)[reply]

Discussion

Voting

Add watchers variable to AbuseFilter

  • Problem: Some pages are not well watched, which can let vandalism go unnoticed for a long time (in particular in the Template and Module namespaces). Being able to tag some actions on these pages could help prevent vandalism.
  • Proposed solution: Add a variable to AbuseFilter that gives the number of watchers of the page. This should be the same number currently shown in Page information.
  • Who would benefit: Everyone
  • More comments:
  • Phabricator tickets:
  • Proposer: LD (talk) 20:38, 3 February 2023 (UTC)[reply]

Discussion

  • In my opinion, the variable should ideally be "Active watchers". Think of an article about the 2012 Olympics, for example. It may have accumulated hundreds of watchers in 2012, but a large number of them have probably left the project by now, and once 2012 passed it hasn't really attracted newer watchers. So, even with a large watcher number, vandalism may pile up. But if we consider "Active watchers", we'd have a better idea of which articles are more prone to unattended vandalism (see the sketch below). —‍(ping on reply)—‍CX Zoom (A/अ/অ) (let's talk|contribs) 21:49, 17 February 2023 (UTC)[reply]
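For reference, both counts already exist in the API behind Page information, which the proposal says the new variable should mirror. A small sketch follows; the endpoint and threshold handling are assumptions (the counts are hidden below a configured threshold unless the client has the unwatchedpages right).

```python
import requests

API = "https://en.wikipedia.org/w/api.php"

def watcher_counts(title: str) -> dict:
    """Fetch the watcher counts shown on Page information for one page."""
    params = {
        "action": "query",
        "prop": "info",
        "inprop": "watchers|visitingwatchers",  # total watchers / watchers of recent edits
        "titles": title,
        "format": "json",
        "formatversion": "2",
    }
    page = requests.get(API, params=params, timeout=10).json()["query"]["pages"][0]
    return {
        "watchers": page.get("watchers"),                  # absent if hidden or below threshold
        "visitingwatchers": page.get("visitingwatchers"),  # the "active watchers" idea above
    }

print(watcher_counts("2012 Summer Olympics"))
```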

Voting

Allow abuse filters to be hidden to only oversighters

  • Problem: Often, the best way to prevent mass-doxxing of users is by an abuse filter. However, filtering doxxing attempts will usually involve including private information in the filter rules. The abuse log will also contain the private information that the abuse filter is preventing from disclosure, which a human oversighter will have to manually suppress. The existing private filter status is insufficient because it still allows administrators and other editors to view personal information which should be restricted.
  • Proposed solution: Abuse filters should have an option to automatically suppress the abuse log for the filter. Abuse filters should also have a separate option to restrict the ability to view and edit the filter to oversighters when the filter rule contains private data.
  • Who would benefit: Oversighters, who otherwise have to manually suppress filter hits; Doxxing victims, who otherwise have their personal information disclosed to a larger group of people
  • More comments:
  • Phabricator tickets: phab:T290324
  • Proposer: AntiCompositeNumber (talk) 18:56, 23 January 2023 (UTC)[reply]

Discussion

  • How often does this issue actually arise? I've been an admin on en-wiki (which is the project that typically has the greatest problem with long-term stalking/doxxing) for fifteen years, and in the past have been a checkuser, oversighter, and arbitrator there, and could probably count on one hand the number of occasions I'd have found this feature useful. In most cases, administrators being able to view the edit filters is a feature, not a bug; edit filters have a bad habit of triggering false positives, and restricting the ability to view them to oversighters would on some projects mean literally only a couple of people—who likely won't be experienced with regex bug-testing—have the ability to address any bugs in the filters. Iridescent (talk) 07:16, 24 January 2023 (UTC)[reply]
    I can think of at least three times in the past 30 days this would have been useful, including one that is going to mean a noticeable jump in the number of suppressions on enwiki during January. I would hope any OS who isn't comfortable with regex and abuse filter testing would seek out help - either from a fellow OS or from a steward - before attempting to use one on their wiki. Best, Barkeep49 (talk) 13:25, 24 January 2023 (UTC)[reply]
    Hi. I'm the author of the phab ticket. I'm a sysop, oversighter and abuse filter editor on fr-wp; we have encountered several cases of doxxing (with threats of violence) by a small number of LTAs, but those have been very active. It required the use of abuse filters, and using the real names of Wikipedians in an abuse filter, even if not linked to their usernames, is not comfortable at all. On fr-wp, 5 of the 6 OS have abusefilter rights, and I'm one of the main users of abuse filters. Having the possibility to use an OS-only abuse filter does not mean that it should be used on every wiki. Best, — Jules* talk 16:00, 29 January 2023 (UTC)[reply]
  • Just noting that as a volunteer I did a tiny bit of work on this feature, the (work-in-progress/nowhere near complete) results of which I've just committed to a branch. — TheresNoTime (talk • they/them) 20:29, 24 January 2023 (UTC)[reply]
  • I can get behind something like this, but what would the abuse filter regex look like? Would it just be a small database of previously posted addresses and the like, or would it be a filter to catch the pattern of an address? TheManInTheBlackHat (talk) 18:22, 28 January 2023 (UTC)[reply]
    @TheManInTheBlackHat: on fr-wp, we have an abuse filter against doxxing, mostly against the disclosure of editors' real names. The regex's goal is to catch those real names (and variations). Best, — Jules* talk 16:02, 29 January 2023 (UTC)[reply]
  • Agreed that this is a needed feature, and I hope that it can gather enough community support to be selected as one of this year's wishes. Thanks, —MarcoAurelio (talk) 10:50, 6 February 2023 (UTC)[reply]
  • I understand the motivation behind this proposal, but such a feature would seem highly problematic and ripe for abuse itself, as it amounts to the ability of a very small group of users to block arbitrary content without scrutiny even from admins. (The proposal doesn't mention any intention to technically limit this ability to actual doxing content like phone numbers or names, nor does it seem feasible to do so.) I see no reason to doubt the good intentions of the proposer and other oversighters who are advocating here for giving them this extreme power, i.e. to assume that they intend to use it for anything beyond the stated purpose. However, once such a feature is deployed, we cannot assume that in the years and decades to come, all users with that right across all wikis will consistently resist that temptation. For example, recall the Croatian Wikipedia situation where an entire Wikipedia (including ArbCom etc.) was dominated for many years by a small group of power users who imposed their nationalist viewpoint via admin tools (whose use is publicly logged). phab:T290324 would have been the perfect tool for this group (say to thwart the mention of war crime convictions in certain BLPs - one of the cases examined in the linked report -, perhaps with a pseudo-justification in the filter description referring to the "Removal of potentially libelous information" clause of the oversight policy). Access to such an oversighter feature might well have enabled this group to evade for much longer the public scrutiny that eventually brought them down. Besides, even in the current setup where oversight actions are still retroactive and to a considerable extent open to review by non-oversighters, it is not unheard of that some (a minority) of oversighters occasionally overstep their remit and apply interpretations of policy that at the very least stretch the local or global community's consensus. That's another reason to not enable oversighters to operate completely without scrutiny even by admins. Regards, HaeB (talk) 15:22, 6 February 2023 (UTC)[reply]
    @HaeB if oversighters are abusing such a filter, people can turn to the Ombuds commission and these oversighters will lose their rights much faster than the Croatian admins you mentioned.
    Not sure if you're aware of this, but like many other projects, dewiki uses a filter against potential doxxing too. Regular admins (many of whom did not sign the WMF non-disclosure agreement) should not be able to see the content + log of this filter, which they can access right now. Johannnes89 (talk) 16:07, 6 February 2023 (UTC)[reply]
    That's entirely beside the point. "People can turn to" the Ombuds commission or other channels only if they are able to notice and document such policy violations (not to speak of the policy knowledge, skill and energy that may be required for filing a successful complaint). And this proposal will drastically reduce the number of people with that ability, by removing all admins (who otherwise could - and often do, as Iridescent said and I'm sure you are aware anyway - notice and address issues with abuse filters like their frequent false positives).
    In the hypothetical Croatian "libel" filter example above, the group of people encountering it may only consist of those unlucky editors who try to edit a particular BLP to add the war crime conviction and get their edit or even account blocked with a (to them) rather cryptic message. We know that inexperienced editors, including subject matter experts, often find it difficult enough already to understand why their contributions are rejected by a publicly logged revert or deletion, and are rarely able to mount the actions and policy arguments necessary to overcome a mistaken or abusive reaction even when it is open for public scrutiny. So I really don't know where your confidence comes from that such an abusive abuse filter (that blocks edits that are not in fact libelous/oversightable, but can only be inspected by a very small group of fellow oversighters and stewards who may not even speak the wiki's language) would lead to its author "los[ing] their rights much faster than the Croatian admins you mentioned".
Regards, HaeB (talk) 18:29, 6 February 2023 (UTC)[reply]
The level of abuse you are imagining is already possible with private (=admin-only) filters. If oversighters abuse the newly created filter, at least there is an official instance (the Ombuds commission) to turn to, which is much easier than following RfC procedures and getting the help of the global community in case of admin abuse.
I don't see much more risk of abuse than is already present with private filters – but I do see a problem with OS-filter content being visible to admins currently. Johannnes89 (talk) 19:54, 6 February 2023 (UTC)[reply]
The level of abuse you are imagining is already possible with private (=admin-only) filters - I'm sorry, but I'm not sure you actually read my entire comment, which is not about the level of seriousness of the abuse itself but about the size of the group of people who would be able to notice and scrutinize it. (And to your earlier remark about dewiki, yes, as a longtime dewiki admin myself - and former checkuser who e.g. wrote large parts of the project's local CU policy that are still in place today - I'm aware that dewiki uses them. I imagine you are referring to de:Special:abusefilter/267 in particular, which you and others are maintaining regarding doxxing. Besides blocking the kind of information that the proposal talks about, this filter honestly also already contains some questionable entries - the title of an entire book that has been the subject of legit community discussion and is currently still listed in a mainspace article? A good illustration of how such things might veer off course.)
Regards, HaeB (talk) 00:29, 7 February 2023 (UTC)[reply]
OS-level abuse filter logs should be accessible like any other AF log. (But, obviously, log entries containing private data would be suppressed, as is already done currently.)
@HaeB, I don't get what the difference would be from the current use of oversight tools: oversighters could already abuse their tools to revert and suppress content, without anyone outside OS, stewards and the Ombuds commission (OC) being able to check whether there is abuse. Plus, only big wikis (20) have oversighters, so only those would have OS-level abuse filters.
However, regarding your concern (even if I don't share it), there could be a list on Meta-Wiki of all OS-level filters of all wikis (we do not expect dozens of them per wiki), so there would be special scrutiny of them by the OC.
Best, — Jules* talk 19:54, 6 February 2023 (UTC)[reply]
Oversighters could already abuse their tools to revert and suppress content - there's a big difference between oversighting edits (and log entries) retroactively and preventing them from being made in the first place. That is, after all, a main rationale for using abuse filters. OS-level abuse filter logs should be accessible like any other AF log - not according to the proposal, which asks that "Abuse filters should have an option to automatically suppress the abuse log for the filter" (without limiting the suppression to only sensitive parts of the log, say).
Similar to Johannnes89 above, you are basically contradicting yourself here, arguing on the one hand that the proposed change would not meaningfully decrease transparency when it comes to its benefits (ability of admins to scrutinize filters and address problems with them), but is necessary to decrease transparency when it comes to its downsides (increased exposure of some sensitive information to admins). Regards, HaeB (talk) 00:29, 7 February 2023 (UTC)[reply]
  • Yes: I disagree with the proposal regarding the logs auto-suppress; I'm against it.
  • No: I do not argue that the proposed change would not decrease transparency (yes, it would for AF editors; that is the point of it); I argue that it would not be less transparent than any suppression action.
Best, — Jules* talk 00:45, 7 February 2023 (UTC)[reply]
  • Comment: Looking at the above concerns raised by HaeB, might this benefit from establishing community consensus for such a feature existing prior to any (potential) work being done on it? The alternative, should this proposal get selected for work, is the implementation of a feature which then ends up unused while discussions take place and policies get built — TheresNoTime-WMF (talk • they/them) 19:27, 6 February 2023 (UTC)[reply]
  • Comment @LD: there are dozens of AF editors on several wikis (144+25 on en-wp!), and any sysop can ask to be an AF editor. The whole point of OS is to keep some private data... private, by limiting the number of people who can access it. It would make more sense to check that enough OS are AF editors, and to recruit some if needed. Best, — Jules* talk 23:04, 10 February 2023 (UTC)[reply]
    @Jules* I mean: the AF config has abusefilter-hide-log (the right to hide logs), which can remain an OS right, and it also has abusefilter-hidden-log (the right to see hidden logs), usually given to OS. Nevertheless, wikis can't really give AF editors the "abusefilter-hidden-log" right, even though it doesn't give access to suppressed logs and revisions, since there's no policy (like the CU policy, OS policy, etc.) allowing AF editors to sign the Access to Non-Public Personal Data Policy and Confidentiality Agreement. I'm not saying any AF editor should be able to see AF hidden logs; I'm saying any wiki should be able to make that possible if it fits its needs. LD (talk) 23:23, 10 February 2023 (UTC)[reply]
    I don't understand. Why do you want to give abusefilter-hidden-log to non-OS, as it would mean that non-OS could access suppressed contents? (If you want to say that there should be a sysop-level hiding right for logs, in addition to the ability for OS to suppress contents, I agree, but this is not the subject of this proposal.) — Jules* talk 23:34, 10 February 2023 (UTC)[reply]
    From the proposal: "The existing private filter status is insufficient because it still allows administrators and other editors to view personal information which should be restricted."
    That's not true; it depends on the AF config: by granting abusefilter-log-private to OS only, you allow only OS to see private logs, no one else. You could also grant it to AF editors and OS only. (There is no point in that at the moment, since there's no policy for AF users.) So, it depends on the wiki's and Meta's scopes.

    AF users will be in contact with private details no matter what. For instance, the AF extension won't erase IP addresses added to filters after at most 90 days; LTA-based filters are used as retention in order to keep identifying a person. Even non-public details can be added to filters after getting an email from a CU. That's why my thinking is linked to this proposal, since policy about privacy is the main subject.

    By contrast, this is not a concern for #Allow checkusers to use user-agent variables in Abusefilters & #Allow checkusers to use XFF variable in Abusefilter: developers could create an encrypted export of private data retrieved from the CU extension, which is then imported into a filter. But you can't encrypt unexpected private details from any wiki user. Of course suppression is needed, but there are benefits to letting AF users check why suppressed logs matched filters in the first place.

    From the proposal: "The abuse log will also contain the private information that the abuse filter is preventing from disclosure".
    I can't disagree with you: OS users keep data confidential. Why do they do so? Because they sign an agreement not to disclose it.
    AF editors do not sign one. We "hope" they do not disclose. LD (talk) 00:55, 11 February 2023 (UTC)[reply]
    No, I think you don't get it, @LD ;-). "The existing private filter status is insufficient because it still allows administrators and other editors to view personal information which should be restricted." refers to the fact that (anti-doxxing) abuse filters (not logs) containing private data are accessible by all AF editors; they should be accessible only to oversighters.
    The proposal has two parts:
    • create abusefilters only visible to OS, in order to correct the current situation described above;
    • allow an auto-suppress of logs of those newly created OS-level abusefilters (it only means allowing actions that are currently already manually done by OS to be done automatically for some filters).
    — Jules* talk 10:30, 11 February 2023 (UTC)[reply]

Voting

Make the AbuseFilter edit window resizable and larger by default

  • Problem: AbuseFilters are a powerful tool to fight some cases of harassment. But the window used to edit abuse filters is ridiculously small. As a consequence, it's a chore to work on abuse filters, especially when working on them often.
  • Proposed solution: Make the AbuseFilter edit window larger by default (there is no valid reason for such a small window), and make it resizable.
  • Who would benefit: Firstly, abuse filter editors; indirectly, all users subject to on-wiki harassment.
  • More comments:
  • Phabricator tickets: phab:T294856
  • Proposer: — Jules* talk 16:15, 29 January 2023 (UTC)[reply]

Discussion

Voting

Mitigate the damage caused by proxy blocks

  • Problem: Most English Wikipedia administrators live in developed countries; as a consequence, the blocking practices that have evolved are ones that work well for the internet infrastructure of developed countries, but much less so for developing countries. Specifically: 1) by the time the internet became mainstream in many developing countries, most IPv4 addresses had been reserved by richer countries, so in these countries it's common that many internet users have to share the same IP address (CGNAT), so blocks cause large-scale collateral damage. 2) Wikipedia users in oppressive countries often need to use IP-hiding services (Tor, open proxies, VPNs etc.) to remain safe, but these services share the same IP addresses between their users and can be weaponized by vandals, which means they tend to be blocked preemptively.
    The same can be said about mobile users to some extent: patterns of IP reuse are increasingly common (see e.g. iCloud and T-Mobile), but because mobile use among developed-country power users and admins is low, the blocking practices that emerged are disruptive for mobile users.
    More information on the topic can be found on Talk:No open proxies/Unfair blocking and Diff's series of posts on the topic: [3], [4], [5]
  • Proposed solution: I think we are still in an exploratory phase with this problem, and many potential solutions are too large for the wishlist, so it's best to leave it to Community Tech to identify things they can do.
  • Who would benefit: Users who are forced to use proxies for their safety, users in less privileged countries who only have access to ISPs with proxy-like behavior, possibly mobile users.
  • More comments:
  • Phabricator tickets:
  • Proposer: Tgr (talk) 07:22, 1 February 2023 (UTC)[reply]

Discussion

  • I don't want to completely dodge the "what can we do" question, so here are some ideas (although there are probably better options, and quite likely we haven't found all of them yet):
    • Improve the usability and accessibility of notifications about / explanations of proxy blocks, and make it easier for users to request exemptions. (E.g. T243863, T265812)
    • Some sort of peer-to-peer exemption system where users "in good standing" can exempt other users from proxy blocks (related: T189362).
    • Some kind of "proof of work" system where a user can spend some time to bypass a proxy block.
    • An exemption request system that lets you tie your request to your established internet identity (e.g. social media accounts) so it is easier to identify well-meaning users.
    • Make it easier to exempt editathon participants from range blocks, e.g. via T27000.
    --Tgr (talk) 07:22, 1 February 2023 (UTC)[reply]
  • A lot more could/should be done to prevent vandalism before blocking anonymous users by their IP address. Most vandals are not very sophisticated, so session/cookie-based blocks should work for them, if made possible. Throttle: 3 edits in 10 minutes, 10 edits in 1 hour? An obligatory captcha for IPs after 3 edits. A short automatic block for obvious vandalism (repeated copy-paste, ALL CAPS, emojis). An obligatory answer to "How did you improve this article?". Know your enemy, WMF! The little measures that I mentioned are (of course) all circumventable, but they do lead to a measurable reduction in vandalism (see the throttle sketch after this discussion). Too restrictive? I think it's way more restrictive to block entire networks of some big internet providers – which is what sysops and stewards are manually doing now, plus 20,000 en-wiki bot blocks per day. ponor (talk) 10:59, 1 February 2023 (UTC)[reply]
  • @Tgr: I wonder if the IP Masking project might solve some of these problems, particularly of collateral when blocking IPs. This proposal is likely to be too big for Community Tech, but is a good candidate for the larger suggestions category. Unless you want to limit the scope of this proposal to something more discrete and manageable. DWalden (WMF) (talk) 14:16, 1 February 2023 (UTC)[reply]
    @DWalden (WMF) I figured CommTech could pick themselves something that they feel is appropriately sized - I think at this point what we have for improving the proxy block situation is a bucket of independent improvements (some large, some small) rather than a single block of work, so there is a lot of wiggle room in defining the scope. Deciding what's the most useful / feasible to work on would take some amount of research itself, that's why I figured it's better not to try to specify it as part of the wish (the guide suggests that's OK). I can try to turn the wish into something more concrete if that's preferred. Tgr (talk) 20:08, 1 February 2023 (UTC)[reply]
  • On enwiki I usually just refer the many unblock requests of this sort to WP:IPECPROXY where they can request IPBE. Do other wikis have this page or its equivalent? Daniel Case (talk) 06:08, 2 February 2023 (UTC)[reply]
    They can request it, but do they actually get it? AIUI the process sometimes takes weeks due to the size of the backlog, it involves writing an email to an unknown person and having to justify yourself (which, if e.g. you are using a proxy to circumvent a ban on Wikipedia in an oppressive regime, is not the most comfortable thing to do - what are you going to say? how much can you trust the random anonymous volunteers at the other end of the email address?), and I don't think it's common for other wikis to have even that much. It's better than nothing (and exposing it more clearly to users is one of the things CommTech might be able to do) but not great. Tgr (talk) 03:01, 5 February 2023 (UTC)[reply]
    Yeah, I think some users just don't want to deal with it. One flatly told me he didn't want his email address to be able to be so easily connected to his account. We need something that works better for these users. Daniel Case (talk) 03:21, 16 February 2023 (UTC)[reply]
  • I think this actually is the result of a larger and probably not easy to solve problem: MediaWiki still relying mainly on IP addresses & Co. for vandalism/abuse prevention and sockpuppet detection, so NOP became necessary. If MediaWiki could rely on something different (what: I don't know) to handle vandalism/abuse/sockpuppet detection/prevention I feel proxy blocking would perhaps no longer be needed, or not that much maybe. Apologies for any inexactitude, as I'm not really "techy". —MarcoAurelio (talk) 11:05, 6 February 2023 (UTC)[reply]
  • Maybe unrelated, but currently, by default, autopatrollers or approved bots are automatically blocked, even if they are logged in, when they edit from an IP covered by a range block. To me, this is a good-faith bug, since there is no consensus for, and no strong benefit in, blocking these trusted logged-in users (and maybe other trusted roles). phabricator:T309328 --Valerio Bozzolan (talk) 20:38, 6 February 2023 (UTC)[reply]
  • See also phab:T309328. In my opinion, it would be better to allow IPs or ranges to be (locally or globally) "semi-blocked", that is, blocked with (auto)confirmed users exempted. This would be a level between a soft (anon-only) and a hard (all non-IPBE) block, and would be applied to most open proxies.--GZWDer (talk) 23:09, 8 February 2023 (UTC)[reply]
  • Please see also Explore evasion methods of state-level censorship across Wikimedia movement. --Diskdance (talk) 15:08, 13 February 2023 (UTC)[reply]
  • I don't know the technical details, but maybe those who use 2FA could automatically be allowed to use VPNs and proxies, without changing the IP blocks on these services. --Klip game (talk) 08:53, 16 February 2023 (UTC)[reply]
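A toy sketch of the per-session throttle ponor suggests above (3 edits per 10 minutes, 10 per hour), purely to illustrate the idea; it is not how MediaWiki's rate limiting or AbuseFilter throttling is implemented, and the session-token mechanism is an assumption.

```python
import time
from collections import defaultdict, deque

# (max_edits, window_seconds) pairs taken from the numbers in the comment above.
LIMITS = [(3, 10 * 60), (10, 60 * 60)]

_history = defaultdict(deque)  # session token -> timestamps of recent edits

def allow_edit(session_token: str, now=None) -> bool:
    """Return True and record the edit if this session is under every limit."""
    now = time.time() if now is None else now
    edits = _history[session_token]
    longest = max(window for _, window in LIMITS)
    while edits and now - edits[0] > longest:   # forget edits older than the largest window
        edits.popleft()
    for max_edits, window in LIMITS:
        if sum(1 for t in edits if now - t <= window) >= max_edits:
            return False                        # over the limit: e.g. show a CAPTCHA instead
    edits.append(now)
    return True
```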

Voting