
Talk:IP Editing: Privacy Enhancement and Abuse Mitigation/Archives/2020-11


What if someone unmasks their "own" IP?

I just tried resetting my phone's connection a few times, and was bounced all over a /16 range. Suppose, from each of the 65536 addresses in this range, Bruce were to make an edit to the sandbox, or preview ~~~~. Bruce would then have all 65536 masks for this range.

  • If Bruce posted, on-wiki, a list of all 65536 mask-address pairs, would the WMF take any action?
  • If Bruce posted this list off-wiki, then what? Would you really demand an external site take down information that the WMF voluntarily provided? That sort of thing usually ends very badly.

Of course it doesn't stop at 65536 addresses. With enough coordination, the IPv4 addresses from every popular ISP could be unmasked in time. And the whole project would have been pointless. Suffusion of Yellow (talk) 19:42, 1 November 2020 (UTC)
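A minimal Python sketch of the enumeration described above, purely illustrative: the /16 is a placeholder private range rather than a real ISP allocation, and observed_mask() is a hypothetical stand-in for whatever the masking interface would display, not the actual scheme.

```python
import ipaddress

# A /16 network spans 2**16 = 65,536 addresses.
network = ipaddress.ip_network("10.0.0.0/16")  # placeholder range, not a real ISP allocation
print(network.num_addresses)  # 65536

def observed_mask(ip):
    """Hypothetical stand-in for whatever mask the interface would display
    after a sandbox edit or a ~~~~ preview made from this address."""
    return f"Mask-{hash(str(ip)) % 10**6}"  # dummy value, not the real scheme

# Bruce's lookup table: one probe per address in the range.
unmasked = {str(ip): observed_mask(ip) for ip in network}
print(len(unmasked))  # 65536 mask-address pairs
```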

Hi, thanks for considering this. I talked to Legal about this, and this scenario doesn't make the masking useless from a legal perspective.
But also from a practical perspective, and more generally, I think looking at privacy and security as a binary, a yes/no question, isn't the best way to approach it. There's a significant difference between "theoretically possible, but very unlikely and would require a lot of time and resources" and "with no effort". An attacker has a budget. That might be time, that might be money, it might be patience and effort. A lot of things we do to protect editors have theoretical ways around them. They're still worth doing, because they make us more secure in editing by making it more difficult to attack us. Also, depending on how we handle persistence, the mask for an IP that no one uses to edit our wikis for a meaningful period of time would likely change to a new mask. /Johan (WMF) (talk) 03:08, 5 November 2020 (UTC)
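A toy sketch of the persistence idea in that last sentence, assuming a hypothetical expiry window; the actual persistence policy had not been decided at this point, so every detail here (window length, naming scheme) is an assumption.

```python
import time

EXPIRY_SECONDS = 90 * 24 * 3600  # assumed retention window, purely illustrative

# ip -> (mask, last_edit_timestamp)
mask_table: dict[str, tuple[str, float]] = {}
counter = 0

def mask_for(ip: str) -> str:
    """Return the existing mask for an IP, or mint a new one if the old
    mapping has expired (or never existed)."""
    global counter
    now = time.time()
    entry = mask_table.get(ip)
    if entry and now - entry[1] < EXPIRY_SECONDS:
        mask, _ = entry
    else:
        counter += 1
        mask = f"Unregistered-{counter}"  # placeholder naming scheme
    mask_table[ip] = (mask, now)
    return mask

print(mask_for("203.0.113.7"))  # e.g. Unregistered-1
print(mask_for("203.0.113.7"))  # same mask while the mapping is still fresh
```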

Edit filters

I haven't seen any mention of edit / abuse filters, so I thought I would just raise this for those managing the todo issue list. On enwiki, and I'm sure other wikis, we sometimes rely on the IP address variable in edit filters. If I can give just one relevant public example: "CongressEdits", which tags edit histories. Actually I don't particularly like this filter, but it's public and I think the issues can be generalised. This filter tests against a few /16 ranges, but we do have filters testing against smaller ranges (actually Congress only uses a really small subset of these ranges). And there's Wikipedia:Congressional staffer edits if you haven't seen it. So this seems to raise a couple of issues, and probably more than I can think of. On one hand, anyone who remembers the allegations flying around after this shitshow on Wikipedia a couple of years ago will know that just showing the (shared proxy) IP address of a perpetrator is actually quite dangerous. But on the other hand, if we tag a Congress IP, then the goal of masking 'all' IPs from almost everyone is defeated. There is also the fact that the addresses are shown in a public filter and logged publicly. Even a private filter, which on enwiki would be considered undesirable, would raise some permission issues. -- zzuuzz (talk) 13:20, 5 November 2020 (UTC)

Indeed. We've been talking about things like this in the team, but only in the light of e.g. Twitter bots, not the edit filters. Thanks for raising this! /Johan (WMF) (talk) 14:05, 5 November 2020 (UTC)
I'll just add another thought on filters. I don't want to focus too much on Congress edits, which is a perfectly valid discussion point, but we do more than just tag COIs - we actually prevent some chronic disruption with these filters, where blocks and protections are not feasible. In addition to the standing filters which target a number of specific ranges, there are times when we have to filter (usually, throttle) all IPv4 addresses (as opposed to IPv6) in some way, usually because of a vandalbot spree. This might be done for a selection of matches, or entire namespaces. And I think it goes without saying that being able to see the actual filtered IP address at some point, for 'IP checkers', will remain essential. -- zzuuzz (talk) 15:50, 5 November 2020 (UTC)
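To make the mechanics concrete, here is a rough Python sketch (not AbuseFilter syntax) of the kind of range test such a filter performs; the ranges are documentation-only placeholders, not the ones in any live filter. The point is that the check needs the real editing IP as input, which is exactly what masking would hide from most filter managers.

```python
import ipaddress

# Placeholder ranges standing in for the /16s (or smaller ranges) a filter might watch.
WATCHED_RANGES = [
    ipaddress.ip_network("192.0.2.0/24"),     # documentation range, example only
    ipaddress.ip_network("198.51.100.0/24"),  # documentation range, example only
]

def matches_watched_range(editor_ip: str) -> bool:
    """Return True if the editing IP falls inside any watched range."""
    ip = ipaddress.ip_address(editor_ip)
    return any(ip in net for net in WATCHED_RANGES)

print(matches_watched_range("192.0.2.45"))   # True -> tag, throttle or disallow the edit
print(matches_watched_range("203.0.113.7"))  # False -> no action
```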
@Johan (WMF): Just to confirm that in my case it is essential to be able to continue using IP ranges in edit filters. In my wiki there is a very active LTA who uses a small number of highly dynamic IP ranges (IPs changing within a minute, possibly to a completely different range, not even within the same /16), but is a real pain in the ass to the community (global and ArbCom bans already applied). The only alternative to a range-specific edit filter is one preventing all IP users from editing, which would be too restrictive. Thus the ability to target an edit filter at a few specific IP ranges is essential for fighting this specific LTA — NickK (talk) 10:29, 12 November 2020 (UTC)
Noted. /Johan (WMF) (talk) 05:55, 16 November 2020 (UTC)

Limit it to only certain jurisdictions?

When I hear "legal", I suspect this might be driven by the European GDPR and other similar laws in a few countries/regions. Why not use IP address allocation data and publicly available geolocation to apply it only in regions where those laws are in effect? 69.89.100.135 20:35, 6 November 2020 (UTC)
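A rough sketch of the gating this suggests, assuming a hypothetical lookup_country() standing in for a GeoIP or allocation-registry query; the country codes are illustrative only. (Johan's reply below explains why a piecemeal rollout along these lines was not pursued.)

```python
import ipaddress

# Illustrative only: jurisdictions where masking would be switched on.
MASKING_JURISDICTIONS = {"DE", "FR", "BR"}  # placeholder ISO country codes

def lookup_country(ip: str) -> str:
    """Hypothetical stand-in for a GeoIP / allocation-registry lookup."""
    return "DE"  # dummy value so the sketch runs

def should_mask(editor_ip: str) -> bool:
    """Mask only when the IP geolocates to a listed jurisdiction."""
    ipaddress.ip_address(editor_ip)  # validates the address format
    return lookup_country(editor_ip) in MASKING_JURISDICTIONS

print(should_mask("203.0.113.7"))  # True with the dummy lookup above
```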

Hi, sorry for the late reply – we needed to take a proper look and consider this before we could reply. We’ve checked with Legal, and (as noted above on this talk page) the considerations that led to this project aren’t limited to one single jurisdiction. It’s also a gradual process where more and more of the world is moving in a certain direction with regard to online privacy. If we tried to limit it this way, for some international communities the change would be gradual, in that over time more and more users would be affected, while periodically other communities would see a huge shift all at once. Trying to implement this piecemeal would offer patchy and uneven privacy protection, and would also mean a constantly changing landscape to adapt to, both technically and for the communities, instead of solving the problems as well as we can from the beginning.
(Thank you for being part of wanting to solve this. Appreciated.) /Johan (WMF) (talk) 07:26, 23 November 2020 (UTC)
It is possible that Wikimedia risks being out of line with some interpretations of recent and future legislation. However, my impression is that these interpretations are still being felt out - and I'm worried the wording above suggests an assumption of a single direction of travel that cannot be halted or reversed by the actions of groups or individuals, but rather must be accepted and accommodated. The WMF is not powerless here; it can go only as far as it needs to, and indeed push back on demands where it believes it has a case, while there's still hope that such limits may be curtailed - in line with Wikimedia's vision for everyone to know everything.
At the same time, I appreciate that rolling back GDPR/CCPA is not our purpose, and there are practical concerns: court actions or fines may disrupt Wikimedia's operations. But I think we'll still end up running into the issue of "one size does not fit all" - some editing communities will want different levels of privacy to others. What the WMF does may vastly exceed legal requirements in one place yet be insufficient elsewhere; and creating tools makes it harder to justify not using them. Delegating to editing communities the decision on who has access (if this is actually in the offing) may only be a partial solution. GreenReaper (talk) 05:37, 3 December 2020 (UTC)