
Talk:Incident Reporting System/Archive 1


An FAQ is available

Hello community, to help answer your questions and make the answers more visible for others who have similar questions, we have created an FAQ on the project page. Please consult it, give feedback, or ask additional questions if you have more. –– STei (WMF) (talk) 06:14, 20 January 2023 (UTC)

Questions

Hi guys, I would love to answer your questions in a more private way. Do you have these questions on a form or an email where I can send the answers? Thank you and congratulations on your work! XenoF (talk) 22:17, 24 August 2022 (UTC)

We will make Google Forms available. Until then, you can also answer privately by email to @MAna (WMF) ––– STei (WMF) (talk) 14:15, 25 August 2022 (UTC)

Use something like the Volunteer Response Team system?

I'd encourage you to look at how the Volunteer Response Team works (both generally, and see commons:Commons:Volunteer Response Team) - perhaps you could share infrastructure to avoid setting up a completely new system? That system already exists and seems to work well, but it could also do with improvements that you'd want anyway for a new system. Thanks. Mike Peel (talk) 12:57, 25 August 2022 (UTC)

It will need to be a new system. This is something specifically requested in the UCOC enforcement guidelines (a project/process-agnostic, private, MediaWiki-based reporting tool). Vermont 🐿️ (talk) 14:17, 25 August 2022 (UTC)
@Vermont: MediaWiki-based sounds good. Maybe it would be something that VRT could use in the future, instead of a separate system? Either way, I think there are synergies here, and experience that could be shared. Thanks. Mike Peel (talk) 16:28, 25 August 2022 (UTC)
In my list of suggestions below, I recommend that they allow enforcement processes to have the tool forward reports to VRT queues as emails. It's necessary to have that option, though optimally the primary ticket-handling method would be through MediaWiki, like with the global rename queue. Vermont 🐿️ (talk) 16:30, 25 August 2022 (UTC)
@Vermont thanks for the feedback, your suggestions have been noted. ––– STei (WMF) (talk) 15:36, 5 September 2022 (UTC)
Thank you Mike, that's valuable. ––– STei (WMF) (talk) 14:17, 25 August 2022 (UTC)
Hi Mike! Aishwarya here, I'm the designer for Trust and Safety Tools, working with @MAna (WMF) and @STei (WMF) on the incident reporting system. Thanks so much for your input! There is definitely synergy between the VRT and what we're doing. To echo what Vermont said, the VRT will likely receive reports through our system. We'll see! We've been doing a deep dive into the processes and systems that already exist to handle behavioral incidents and issues; VRT is one of them, as well as AN/I, ArbCom, Ombuds, emergencies@, and the interactions which happen off wiki. If you don't mind my asking, @Mike Peel, have you ever had to use the VRT, observed it in action, or been a part of it? AVardhana (WMF) (talk) 20:58, 7 September 2022 (UTC)
@AVardhana (WMF): Thanks, it's great to hear you've been looking into those different systems! I used to be an OTRS volunteer (pre-rename to VRT), and am familiar with that system, but I'm not active with it at the moment. Thanks. Mike Peel (talk) 19:37, 13 September 2022 (UTC)
Hi Mike! My apologies for this delayed response, it's been a hectic few weeks. Ah, I'm so glad to hear you've been an OTRS volunteer. We may very well wish to pick your brain about your experience at some point. Cheers. AVardhana (WMF) (talk) 23:43, 4 October 2022 (UTC)

Some thoughts

There are a lot of questions to answer and unpack here in developing this project, questions which would usually be the result of months of large-scale community discussion on a local project. And this is something with global effects, a reporting tool that should be easily accessible by both reporters and enforcement processes.

If this consultation does not effectively communicate with the people who we hope will use this tool once it's created, those people will not use it. This stage of discussions is incredibly important, and unfortunately not everyone is equipped to give informed opinions on what is needed from a reporting tool. I recommend engaging in more targeted consultations than village pumps and talk pages, specifically communicating with the established enforcement processes that we hope will end up using this tool. This can be part of identifying pilot wikis, but the difficulty there would be creating a centralized tool with global modularity without catering too much to the specific needs of pilot wikis.

Something like Special:GlobalRenameQueue in terms of design and ticket management could be cool. The difficulty would be making it project-agnostic, and allowing projects to pass individual reports between each other or to share access.

Some things such a tool would need (a rough configuration sketch follows this list):

  • Everything as defined in the UCOC Enforcement Guidelines, including ongoing support.
  • Opt-in from local projects with the capacity to handle this and interest in doing so
  • Ability to specify which enforcement body on a project you want to send the report to. Stewards or Meta-Wiki admins on Meta-Wiki. A project's CheckUsers or ArbCom or local admins, etc.
  • As a corollary of the above, the ability for enforcement processes receiving a report to easily forward that report to other enforcement processes, and/or allow view or reply access from those other enforcement processes. Reports should by default be visible only to the entity they are reported to, and possibly some other specific groups (U4C members, Stewards, Ombuds, T&S).
  • Users should be able to access the central reporting system, and whatever ticket management results from it, from any Wikimedia project without losing the centralized aspect, so that forwarding and changes of ticket ownership are easy.
  • Varying degrees of privacy. Allow reports to be made publicly, allow them to be made privately, and allow the reporter to specify if they are okay with their private report being made public (which people handling reports can then change).
  • Local project modularity. It's important to ensure that it is globally accessible and easily forwardable/viewable between processes, but it's also important to allow local projects to decide what questions the reporter is asked (what fields are open in the report form). This should include the option for some projects to have their reports automatically converted to an email, specifically for those who prefer VRT queues.
  • Process-agnostic. Mentioned a few times in this list, but...allow the reporter to select from a drop-down or something which enforcement process to send their report to. Processes can be listed on an opt-in basis by those processes.
  • Something to clarify that this is not for content disputes or basic things that can be solved with talk page discussions.
  • Option for enforcement processes to leave internal comments on tickets/reports not visible to the reporter
  • Reporting should be an action that shows up in Special:CheckUser
  • Options for declining, closing without comment, closing with comment, etc.
  • Email updates to reporter if preferred, otherwise on-wiki notification updates.
  • When it comes to whether IPs can report, I'd say probably the best answer is no, but...maybe allow enforcement processes to permit IP reports on a process-by-process basis?
  • Specific users to be listed with certain access levels based on user rights, AND based on lists. For example, ArbCom membership isn't a user right, but admin/checkuser/steward/ombuds are user rights.
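To make the per-process customization in the list above concrete, here is a minimal illustrative sketch in Python (purely hypothetical; none of these class or field names come from the actual project) of how an opted-in enforcement process's configuration might be modeled:

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Optional


class Privacy(Enum):
    PUBLIC = "public"                            # posted to an on-wiki noticeboard
    PRIVATE = "private"                          # visible only to the receiving enforcement process
    PRIVATE_MAY_PUBLISH = "private_may_publish"  # reporter consents to later publication


@dataclass
class EnforcementProcessConfig:
    """Hypothetical per-process settings an opted-in project might define."""
    process_name: str                           # e.g. "Meta-Wiki administrators"
    opted_in: bool = False                      # processes are listed on an opt-in basis
    allowed_privacy: set[Privacy] = field(default_factory=lambda: {Privacy.PRIVATE})
    forward_to_vrt_email: Optional[str] = None  # convert incoming reports to email for a VRT queue
    report_fields: list[str] = field(default_factory=lambda: ["summary", "diff_links"])
    viewer_groups: set[str] = field(default_factory=lambda: {"steward", "ombuds", "u4c"})
    accept_ip_reports: bool = False             # decided process by process
    allow_internal_comments: bool = True        # responder notes hidden from the reporter
```

The point of the sketch is only that routing, privacy options, form fields, and access groups would be per-process data rather than global constants.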

This is a very significant tool, with a lot of idiosyncrasies to be aware of and work with. There are many community-maintained tools that engage in report management like this, and so many more community processes, and making a centralized reporting system is a difficult task. Best, Vermont 🐿️ (talk) 15:39, 25 August 2022 (UTC)

I will have additional thoughts but I wanted to wholeheartedly endorse Vermont's points here in the interim. Best, KevinL (aka L235 · t) 05:49, 26 August 2022 (UTC)
Thanks for this, super interesting and lots to process! I'm curious in particular about the varying degrees of privacy and the thinking behind them. What are we looking to solve here? Is it about user preference/comfort, or is it more that different types of harassment cases would require different approaches when it comes to privacy? MAna (WMF) (talk) 17:20, 26 August 2022 (UTC)
MAna, thank you for your reply! How we currently handle privacy is quite complicated. It depends on the reporter, it depends on the violation being reported, and it depends on a near-infinite number of other factors. It also varies significantly between enforcement processes. A focus on allowing enforcement processes to customize how they accept reports is necessary for those enforcement processes to use this tool.
In my view, at least these few options are needed:
  • For the reporter
    • Option to post the initial report publicly or send privately
    • If privately, option to state their comfort level with it being made public
  • For the enforcement process
    • Ability to limit the reporter's options, say to only accept public reports, or only private reports.
    • Option to make individual private reports public, with the consent of the reporter. Note that some reports may by necessity go unactioned if they concern something that would normally be addressed publicly but cannot be because of the reporter's preference for privacy. This is something for individual enforcement processes to weigh.
    • Option to make individual public reports private
    • Whether to allow anonymous reports (basically everyone's going to say no but it could be beneficial to have the option here)
Making a reporting structure that Wikimedia enforcement processes are going to want to use is an incredibly difficult task, and it will require inordinate levels of customization. Frequent communication with people involved in those enforcement processes is needed; if not, no one's going to want to use this system. PIRS not only has to be as good as existing tools, but better, to warrant a shift. Vermont 🐿️ (talk) 19:34, 26 August 2022 (UTC)
MAna, a question: I noticed that there doesn't seem to be a part of the 2-step plan where designs for this (whether wireframes, flow charts, feature lists, etc.) go through any sort of community review prior to the actual software being made. This is prone to missing needed features that may be very difficult to add post-implementation, and could make the effort put into the project moot. Special:Investigate, for example, is not widely used, in large part because of insufficient iteration and discussion with the people expected to use the tool. Is there the possibility of changing the timeline or plan to account for this? Vermont 🐿️ (talk) 04:22, 27 August 2022 (UTC)
We definitely want to have designs go through some sort of community discussion/feedback. It is not specifically listed but we are thinking about it as part of Phase 1. MAna (WMF) (talk) 17:04, 29 August 2022 (UTC)
Hello @Vermont! My name's Aishwarya (she/her), and I'm the designer for the Trust and Safety Tools team. I was reading through your Spam and abuse techniques documentation earlier today on office wiki. Thanks so much for your extensive contribution to this effort, this early on! +1 to your recommendation to consult and work very closely with the people who will be adopting this system i.e. the responders. As for sharing design artifacts and sketches as we go, I'll absolutely be doing that. Like Madalina has said above, regularly sharing will be part of Phase 1, not just Phase 2. In comparison to you and others on this page, I'm quite new to the movement and have a lot to learn about communicating with the community, posting on meta wiki, etc... but I'm trying to learn quickly and getting the hang of it :) bear with me please! AVardhana (WMF) (talk) 21:51, 7 September 2022 (UTC)
Heh...what spam and abuse techniques page? wink. And that sounds good, happy to hear there'll be regular sharing of designs. Welcome to the movement! :D Vermont 🐿️ (talk) 00:22, 8 September 2022 (UTC)
Woohoo! Thank you, excited to be here :) AVardhana (WMF) (talk) 00:03, 5 October 2022 (UTC)
I also want to add something about question 5. Many established enforcement processes accept reports via email; the ability to accept those is not really a problem. The difficulty is that many discussions are required by community consensus to take place publicly. So, someone may send in a private email about a content dispute that should be reported to the relevant public noticeboard...but will be told that their options are either to report it publicly themselves or to disengage from the dispute. This tool cannot change those practices; that is a matter of community policy/guidelines. Vermont 🐿️ (talk) 00:19, 30 August 2022 (UTC)
I came here specifically to mention the point being discussed above - the vast majority of issues need to be handled publicly. I don't intend to minimize the exceptional cases, but the primary problems on-wiki are content disputes (which need to be evaluated by the community and handled publicly), or arguments over interpretation and application of community policies on content & behavior (which need to be evaluated by the community and handled publicly), or whining about the other editors' fighting and arguing (which almost always needs to be handled by the community and handled publicly). Things may get heated, but generally it's a standard public community evaluation of the content and/or policies at issue. It is also notable that in a significant minority of complaints, the person who filed the complaint may have behaved abusively and caused the disruption themselves. Sometimes the complainant needs to be sanctioned, per EN:WP:BOOMERANG. If you go to the police to file a complaint that a bus-driver drove away with your forgotten drug-bag, the police will listen to your complaint and then arrest you.
I would suggest talking to English ArbCom for expert information on the kinds of cases that need to be handled privately. The main reason that comes to mind is that anything involving off-wiki identities, off-wiki evidence, or off-wiki behavior needs to be handled in private.
People in content disputes commonly feel their opponents deserve to be eliminated from the argument. Any channel apparently allowing easy one-sided covert attacks against opponents is going to be highly attractive. Any reporting channel needs to clearly state that covert reporting options offer much narrower possibilities for action, and it needs to explain what kinds of cases it can handle. I expect it will still receive an excessive number of inappropriate or abusive submissions. The interface should also inform editors that EN:WP:RFC is a standard first-line option for routine disputes. Alsee (talk) 00:03, 1 September 2022 (UTC)
Thanks @Alsee for your input here! We'll absolutely be talking to English ArbCom and ideally other ArbComs as well. As for following community policies and publicly discussing + resolving content disputes, we've been reading research that's been done on AN/I and this report from 2017 on how confident Admins feel in dealing with harassment, to better understand what does work about our current processes and what doesn't. Fear of reporting due to the boomerang effect, the phenomenon you've linked above, was tied with 'toxicity' and 'complex issues' as the second most common reason people don't report harassment on AN/I in the AN/I survey from 2017. There is a lot of reading and understanding we're doing to catch up on the research and ideation that's been happening about an incident reporting system since 2013. AVardhana (WMF) (talk) 22:11, 7 September 2022 (UTC)
@AVardhana (WMF) sorry for the three week interval, I meant to respond but lost track of the page. I hear you about so much reading, and this is just one of many comparably large topics going on at any given time. Wikipedia is pretty unique, a lot of this stuff can be hard to explain or understand if you haven't lived it. Many core community members have been here 10 or 20 years. Formal community decision making processes may represent many hundreds of years of collective experience. It would be easier if the Foundation were more willing to rely on community expertise for certain specifications and decisions.
Fear of reporting due to the boomerang effect, the phenomenon you've linked above, was tied with 'toxicity' and 'complex issues' as the second most common reason people don't report harassment on AN/I
If you check the survey figures and percentages, that was 8 people. Less than 6% of those surveyed. (136 surveyed, 62 responded in that portion, 12.9% cite Boomerang in that portion = 8.)
Also note that the survey population was a random selection of people who had been active on AN/I. That's like blindly surveying people coming out of a court - some survey responses will come from convicted criminals. Some significant portion of the survey responses were coming from the guilty parties.
Boomerang is only a problem if it is discouraging some significant number of people from filing valid AN/I cases. The survey only determined that number was between zero and eight, and I expect closer to zero. Those Boomerang complaints likely came from guilty parties, and they were likely discouraged from filing frivolous AN/I reports against opponents in routine content disputes. Alsee (talk) 13:24, 30 September 2022 (UTC)
Hi @Alsee! No worries about the delay and I apologize on my end as well for my late response. It's been a hectic week. So... first of all thanks for your analysis! I agree that the guilty will likely cite the Boomerang effect as a deterrent for reporting on AN/I. I also went back and re-read your initial point, which is that the "primary problems on-wiki are content disputes...or arguments over interpretation and application of community policies on content & behavior... or whining about the other editors' fighting and arguing." Agreed there as well. What @MAna (WMF) and I are trying to understand is: what about when the harassment is aggressive, personal (so NOT a content dispute), and possibly persistent (although a one-time offense is still harmful)? If one Wikipedian genuinely feels unsafe because of another Wikipedian's behavior/words, how might we allow them to report that interaction OR that person, without fear of retaliation?
Also there was one more survey done by WMF in 2021 on why people who have experienced harassment don't report it, and they cite "fear of backlash or reprisal". I'm curious what you make of that data? Thanks! AVardhana (WMF) (talk) 00:02, 5 October 2022 (UTC)
Your question essentially starts with a victim of harassment, and asks how to help. I think it will help to instead start at the beginning of our process. A report comes in. At this point it is merely an allegation. That allegation may be valid or invalid.
The question we are dealing with is which cases can and do escalate to a "secret court"?
For as long as the WMF has existed, up until recently, there was an established and accepted scope for "severe" cases. This included any sort of threat, offwiki abuse, any legal matter, and similar severe issues. These cases were handled in secret. There is nothing to discuss here; everyone agrees on these cases.
In 2019 the WMF attempted to expand the scope of the "secret court" to include not-severe cases. That attempt exploded in a major crisis. The WMF is again seeking to expand the scope of the secret court to include not-severe cases, but more carefully this time. However, it will repeat the crisis if the cause of that crisis is repeated.
The question is: Which cases can escalate to secret court?
Answering that question requires understanding the cause of the original crisis.
In severe cases, such as threats or offwiki abuse, the secret court can pass judgment based on the threat or based on the offwiki abuse or similar issue. No problem.
In not-severe cases, there is no threat and no offwiki issues etc. A complaint might include an allegation of harassment or bullying. However fundamentally we have one person complaining about another person's edits. And here is where we run into a problem:
It is appropriate to delete someone's content if that content does not comply with our content policies. It is appropriate to criticize an editor and their work if their work does not comply with our policies. It is appropriate to "stalk" an editor, following them to various pages reverting their edits or arguing or criticizing, if that person has been making a pattern of inappropriate edits. In this situation, the person making the bad edits may file an invalid harassment accusation.
The problem is that you can't evaluate those harassment claims without first passing judgement on the underlying content and the community-policy issues. Was the alleged harasser correctly targeting this person's edits? Are the arguments and criticisms valid? Was the "stalking" a legitimate effort to clean up a pattern of bad edits? Was the accuser primarily responsible for provoking the situation with their bad edits, bad arguments, or abusive tactics?
We had a crisis because the WMF had a secret court passing judgment on which content it liked, passing judgement on which edits it agreed with. That's a no-go. If you have a harassment allegation between a pro-Trump editor and an anti-Trump editor, we can't have the WMF or any other secret court picking which edits it likes or dislikes. We can't have a secret court passing secret judgements on the "correct" way our articles should be written.
A threat is a threat and offwiki abuse is offwiki abuse, regardless of any related issues. However dealing with another editor "aggressively" and "persistently" takes on an entirely different meaning if someone is legitimately cleaning up abusive content. Even "too aggressive" becomes a lesser infraction if someone was doing legitimate cleanup work and they were understandably provoked by abusive behavior from the other person.
If you want to expand the scope for secret cases, you need to define the expanded criteria. It must also be possible to assess guilt or innocence independently of the underlying content and policy issues. The kinds of not-severe cases the WMF would like to shift into secrecy are just too tightly dependent on the underlying content and policy evaluation. Alsee (talk) 17:02, 5 October 2022 (UTC)

Process and Feedback

Hello - I'm going to split my thoughts into comments on the process and comments in response to the questions asked.

Process

My thoughts here have some, but not total, overlap with Vermont's well-elucidated section above.

  • I would note that the tool's scope (even for consultation) might have been better written after we'd seen the amendments from the Revision Committee, particularly with regard to the balance between the right to privacy and the right to be heard.
  • I would support Vermont's statement that specific consultation with functionary groups (more specifically, distinct outreach to Stewards, arbs, and CUOS) is needed.
  • But I would also add that while those groups handle the most complex cases, admins/eliminators handle the most by number, so I wouldn't deprioritise getting feedback from that source.
  • Just to stress the need for genuinely responsive consultation - I'd suggest something along the lines of the discussion tools process. Otherwise it risks blowback in an area that can least afford it and the loss of a potentially very helpful tool - not to mention the waste of the team's time!

Thoughts

  • This tool is going to need a simply insane amount of customisation capability to be able to cover both all the local projects and Steward actions.
  • Privacy - currently this tool is named the "Private Incident Reporting System", which I think must change. This tool can enable privacy, but there appears to be an anticipation that it will always be private. The revisions to the Enforcement Guidelines are likely to significantly reduce the required level of privacy in most cases. Thus we have the first customisation point - how much privacy those raising conduct issues will have through the system. Some projects might want a high level, but others will be using the status quo + UCOC (whichever is the higher).
  • The tool can't just default to a high level of privacy, then; it needs project-by-project calibration, or a low-level default if that isn't possible.
  • Conduct pathways - one of the best ways this tool can be helpful is to ease, automate, and guide conduct submissions from newer editors. En-wiki, for example, has numerous conduct fora (AIV, ANI, AN, AN3, ArbCom, etc.). A platform that can take reporters step by step (complaint, evidence, requested remedy, notification of the accused, etc.) would smooth many things. Those pathways need not all be written by the team, but if not, it needs to be easy for a project's editors to do so (a rough sketch of such a pathway follows this list).
  • The platform should then either post the report on the relevant board (for public cases) or convert it to email for those that the project feels should be private.
  • Many conduct issues (especially the most common, such as vandalism) lead to a warning. Having every identified act of vandalism routed to either a board or an admin would lead to a large expenditure of time and lots of "insufficient prior warnings, editor not blocked but warned" outcomes. Whether this might be resolved by excluding lower-level issues like that, or by encouraging the editor to go and issue a warning themselves, I'm not sure, but it should be considered.
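As a purely illustrative sketch (Python, all names hypothetical, not an actual design from the team), a project-defined conduct pathway along the lines described above might be a small declarative structure that the tool walks a reporter through step by step, then posts on a board or converts to email:

```python
from dataclasses import dataclass, field


@dataclass
class PathwayStep:
    """One question the guided form asks the reporter."""
    prompt: str
    required: bool = True


@dataclass
class ConductPathway:
    """A hypothetical project-defined pathway, e.g. one per venue (AIV, ANI, AN3, ArbCom...)."""
    name: str
    destination: str        # an on-wiki board, or an email address for private handling
    public: bool            # post on the board, or convert to email if the project wants it private
    notify_accused: bool    # whether the accused editor is notified automatically
    steps: list[PathwayStep] = field(default_factory=list)


# Example: a guided edit-warring pathway that ends up on a public noticeboard.
edit_warring = ConductPathway(
    name="Edit warring",
    destination="Wikipedia:Administrators' noticeboard/Edit warring",
    public=True,
    notify_accused=True,
    steps=[
        PathwayStep("Which page is affected?"),
        PathwayStep("Link the diffs showing the reverts."),
        PathwayStep("What remedy are you requesting?", required=False),
    ],
)
```

Because the pathway is just data, a project's editors could in principle define or adjust pathways themselves rather than waiting on the development team.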

I will likely return to add a few more specific answers in response to the team's specific questions, but the above were my primary concerns or thoughts. Please feel free to ping me to clarify or talk! :) Nosebagbear (talk) 20:44, 25 August 2022 (UTC)

The tool should by default be as private as email, imo, unless the handling enforcement process or reporter wants otherwise. This is primarily for issues that require sensitivity; it isn't for content disputes or most things that can go on public noticeboards. And yep, enforcement processes should be able to have subqueues of some sort, like your conduct pathways idea here. Like we do in VRT. Vermont 🐿️ (talk) 21:22, 25 August 2022 (UTC)
Conduct pathways - are you thinking about something like a step-by-step guide of what to do, where to go, what kind of information to submit? MAna (WMF) (talk) 17:39, 26 August 2022 (UTC)

Comments from zzuuzz@enwiki

Hello. I've got to admit I'm unclear about this project's scope, even the UCoC's scope, and I'm nowhere near having any useful questions or answers. However, in the spirit of answering some of the questions above, I'll just respond with the following, from my own perspective as an enwiki admin and checkuser who has read the UCoC.
The vast majority of UCoC violations can be classed as generic vandalism. It's pervasive. This is probably followed by POV-pushing, including paid editing and spam. Hey, you did ask. Following that, the most common violations that I see are threats of violence, outing, persistent defamation, harassment, hate speech, and other abuse from sockpuppets and long term abusers. To respond to any of this we generally warn, report, and block. There are stronger measures, such as edit filters, and lesser approaches, such as topic bans, dispute resolution and just having a chat, but most of anything that's inappropriate, especially grossly or repeatedly inappropriate, will at least result in a block. We're really not shy about blocking.
Violations are reported to admins through various noticeboards and talk pages, occasionally privately through various routes, and are also frequently processed without any report (i.e. by admins at large). I wouldn't say the variety of venues or identifying abuse is generally a problem, and there are lots of signposts to, e.g., the help desks, if you look for them. Make a report to someone somewhere, or be abusive, and someone will probably notice it.
Personally, I would like to see the WMF work better with everyday admins, with ISPs to help them enforce their terms, and with legal processes against unwelcome criminal behaviour, including outside the US. For me then, I would view such a system as a 2-way pipeline to T&S or other departments to build cases to take action in the real world against people who are already blocked or banned. That's where UCoC violations and 'unsafe environment' are really happening. We currently have a pipeline through email addresses, but in my experience they typically resemble a black hole. It's a type of interaction that I'll rarely bother with, on a par with writing a personal blog that no one reads. I read the description of this project and wonder what proportion of reports are going to be generic vandalism and spam that could and should have been reported in public so it's not restricted to an unnecessarily small crowd. If you're not careful, I think you're at risk of being overloaded. -- zzuuzz (talk) 01:02, 28 August 2022 (UTC)

Please respect Wikimedia community volunteer attention and time

I would like to voice my concern that the Wikimedia Foundation has a history of proposing projects like this, making significant investments, then abandoning the projects without closing them out with community stakeholders. This cycle is a major disruption to Wikimedia community planning, and creates a conflict wherein Wikimedia Foundation staff get paid regardless while community members have no power or resources to advocate for themselves to get a usable final product delivered.

Can you please do the following:

  1. Publicly disclose the labor or resource investment which WMF will put into this
  2. Publicly give an estimate of how much volunteer labor you expect to recruit to advance this
  3. Commit to publishing at least annual updates on this project

I am advocating for the Wikimedia community here and requesting sympathy. When these projects fail, as they have in the past, the Wikimedia community is left with the broken promises of help the WMF committed to. This has also caused wasted time, as the WMF discouraged the community from using resources to seek other solutions in favor of advancing the staff-led one. If you are going to do this, make strong commitments to follow through. Also, Wikimedia community labor is not an endless resource to be recruited! I know that our community members generously volunteer, but they do this because of trust in Wikipedia and not because they owe it to a paid staff cycle of tool development! Do not violate that trust! Please disclose in advance how many volunteer hours you will be recruiting and from which communities, and give explicit published credit to communities which source volunteers to support this project!

Past discussions include the 2014 Grants:IdeaLab/Centralised harassment reporting and referral service and the 2019 Community health initiative/User reporting system consultation 2019. In addition to these I would estimate that there were 5 other seemingly major Wikimedia Foundation initiatives which recruited community members to interviews and focus groups to address this challenge, but either were not documented or I do not know where to find records.

I appreciate and support this project. Please proceed ethically and in a socially appropriate way! Bluerasberry (talk) 16:11, 29 August 2022 (UTC)

This is going beyond the current project, but Bluerasberry raised a point on general WMF project development which is especially significant here. The Foundation has poor institutional memory, forgetting lessons learned from previous projects. In particular the Foundation cooks up shiny new projects, forgetting that it lacks the capacity to handle the customization required for ~800 wikis. (NewPagePatrol is particularly notable.) 800 wikis of customization distinctly applies here. The Foundation also forgets that "finished" products need some level of on-going support. Once a current shiny-object initiative is wrapped up, the code ends up orphaned. Once a project is "done" it becomes a monumental task for the community to get anything fixed, anything completed, anything enhanced, or anything updated. That also is sure to apply here.

The Foundation cannot build 800 different reporting systems for 800 different communities. It needs to build a system allowing *us* to specify the various pathways available on any given wiki, along with specifying the destination and workflow for each pathway, along with defining various local-language messages and dialogs for those pathways. If you are going to build that kind of functionality you really shouldn't be building a "reporting system" at all. You should instead be building a generic system, one that enables us to specify the local reporting system AND which we can use for our countless other workflows as well.

Notably, the Foundation once did plan on building something kinda like that. See Workflows. The particular implementation considered at the time was badly off target; it was planned with basically zero community input. In particular, key parts of the idea were backwards, seeking to define a workflow in terms of existing abilities that would be constrained, removed, or impossible for a defined workflow. Such a project would need to be re-envisioned from scratch with far more community involvement, but the underlying idea was promising. Alsee (talk) 11:04, 1 September 2022 (UTC)
@Alsee, your points on a generic system (with pathways available on any given wiki) that works for all and providing on-going support once the project has been completed, have been noted. ––– STei (WMF) (talk) 16:11, 5 September 2022 (UTC)
@Bluerasberry thanks for the feedback. My colleague @MAna (WMF) may have an additional response, but generally, if the community focuses on answering the questions we shared above on the talk page and the interviews we have planned are successful, we will know the product direction and we can get closer to knowing the answers to questions 1 & 2 you raised. Here is some information on which stage of this project we are in. ––– STei (WMF) (talk) 15:59, 5 September 2022 (UTC)

Get data to third-party researchers

A chart to illustrate a process for managing complaints

It is more important that we get data about complaints than that we actually address the complaints themselves. This is because we are currently unable to study the problem well. If we have data to study the problem then we will find the solution; if instead we proceed without collecting data, whatever justice we are able to provide will only be a short-term fix for a problem we do not understand.

Some data about those complaints should go to a trusted neutral third party - not the Wikimedia Foundation, and not volunteers, but probably either a locked dataset which cannot be tampered with, or a university researcher. The reason we need this data is that we do not know the scope, depth, and variety of harassment, and also because the Wikimedia community has low trust in reporting, based on a generation of inadequate processes. I want to specifically say that Wikimedia community members will report Wikimedia Foundation staff for harassment, and that in the past, Wikimedia Foundation staff have been accused of abusing power to dismiss such complaints. Because of this, the design of the system should eliminate all speculation that Wikimedia Foundation staff can pressure or influence the system to avoid accusations. If there were a trusted system in place, then I think much harassment would be prevented.

About the proposal pictured here, we need three things. First, everyone needs to trust that their complaint is received: when they submit, they need a receipt that their report exists. Second, the reporting system is unrelated to any justice system, so we do not need to create justice, but rather give referrals to justice processes where they exist elsewhere, either on wiki or off, as available. Finally, and most important, deidentified data about complaints needs to be available to third-party researchers who can report trends and outcomes, even if they are embarrassing.

Thanks - Bluerasberry (talk) 16:25, 29 August 2022 (UTC)

I'm not sure that a third party handling the data will resolve the problem, as there isn't a "trusted third party" for data analysis that has both widespread community and WMF trust.
While third-party analysis has been helpful and done well for things like ACTRIAL, there they were able to "show their working", which presumably wouldn't be possible at a sufficiently broad scope for this type of material. Nosebagbear (talk) 16:50, 1 September 2022 (UTC)
@Nosebagbear: I do not think you have thought this through. To give an example, the Facebook research dataset is with en:Social Science One, which is an organization where Facebook can send data for university and industry examination. We could do something similar with Wikipedia. Examination of social media data happens at every university in the world. There are dozens of research directions for this and ways to manage both private and public data.
I am not up to date on which of the many independent research platforms are hottest and most relevant, but in 2014 I was representing Wikimedia LGBT+ when I tested Heartmob for use by the wiki community. This is a third-party platform where anyone experiencing harassment on any site can report their problem, and as a third party, that site keeps the complaint so that there is no conflict of interest in sites not keeping complaints against themselves. Technologically these platforms are not difficult to set up; like you said, widespread trust is the expensive part. I do not think that a key aspect of attaining trust is 1) insisting on developing tech from scratch in the WMF or 2) making WMF staff the unchecked en:Honest broker.
I do not think it is realistic or reasonable that the WMF can manage Wikimedia community complaints alone, for any amount of money or resources. To compare with Twitter: the output of independent researchers publishing analyses is beyond anyone being able to read, with hundreds of papers yearly (see the Scholia profile on Twitter). Likewise, even private complaints submitted confidentially will need analysis against public data from all Wikimedia projects, which anyone can analyze. There is not even dedicated staff maintaining Research:Index, so we have not even started to mobilize university research on Wikimedia datasets in an organized way. Although Wikimedia datasets are very well documented in comparison to other datasets, the documentation is still only beginning and mostly inaccessible to typical researchers. Everyone still chooses Twitter analysis; we could get to the point where people seriously look at Wikipedia also. If there were several full-time staff on this there would be plenty to do forever. There are options here. This is not a dead end. Bluerasberry (talk) 12:05, 13 September 2022 (UTC)
@Bluerasberry - while your comments all make sense as to the general complexity of doing wikimedia research and possibilities for improving it, afaict, none of it directly covers how you would either find or create a 3rd party that had widespread trust from both parties, to the tune of accepting its findings without being able to trace its workings down to specific details and the relevant context for each. Wikipedians hate having to take things on trust - having a group we trust is only moderately hard. Having a group we trust without the ability to verify every aspect, much harder. Nosebagbear (talk) 18:55, 13 September 2022 (UTC)
@Nosebagbear: Any random research group from any typical research university anywhere in the world is much better equipped and much more trustworthy to handle the data than 1) crowdsourced wiki volunteers or 2) staff of the Wikimedia Foundation in research.wikimedia.org. Research universities obviously have many tens of millions of dollars of investment in infrastructure to do this beyond what the WMF has. If you want to measure trust, we can do it in money, and say that we can only trust an organization which has an investment history of US$100,000,000 in the space of sensitive social data management and analysis. Many universities meet that criterion. Can you suggest a better measurement of trustworthiness than financial investment in this field of research? Already 90% of Wikimedia research comes from universities, not from the Wikimedia Foundation, and if we wanted, we could both increase the amount of that research and its quality by doing more to document and share Wikimedia datasets. If someone wants even more trust, we can even arrange for research teams at multiple universities to collaborate in examining data for sensitive projects. In that case they all put their reputations on the line to do the project right, and it still costs less to do it at universities than in-house at the WMF. That would be completely normal for university research. What can you imagine that would be more worthy of trust, at lower cost, and that uses existing infrastructure and social norms?
BTW - this is an amateur project, but I gave a go at doing research in this space at my university. See Research:Automatic Detection of Online Abuse. There could be dozens of research teams at many universities every year doing variations of "detect abuse:" projects. Bluerasberry (talk) 20:55, 13 September 2022 (UTC)
Yes - I don't give trust on the basis of prior financial investment. I give it on a history that I have been able to verify, so that when I can't, I can take it on faith that the person or organisation hasn't suddenly lost their sense of judgement. But doing that with conduct is difficult - you need a conduct dataset that is Wikimedian, so the research is sufficiently relevant for the purpose of building experiential trust, but still open for full review. Sufficiently replicated, you can then start moving towards the full process here. But I don't know whether such a process would be either possible or, more accurately, feasible. Nosebagbear (talk) 21:17, 13 September 2022 (UTC)
@Nosebagbear: Can you briefly tell me some characteristics of what you imagine as the most trustworthy, reliable, and valid research team? Also, to what extent do you imagine that Wikimedia Research is that team? For comparison, I was suggesting the upper tier of conventional scholarly research. If I understand correctly, you are expecting something significantly more complex. Bluerasberry (talk) 21:40, 13 September 2022 (UTC)
That rather varies depending on the topic - assuming it's the topic of this discussion, it probably would be one along the lines you've mooted, but one that has demonstrated its research in the specific field of sensitive Wikimedia conduct issues in a way that has allowed verification of both its process and its conclusions from the original data, likely across multiple projects/studies.
There's lots of third-party academic research on Wikimedia and its various facets, but sometimes the process of that research is flawed, and somewhat more often the premises and conclusions are either flawed or missing. For those studies whose data basis is included or available on-wiki, we can go and do our own checking. But for sensitive conduct research, by its nature, that would (at least ultimately) not be possible to do. Hence a higher burden of demonstrable near-flawless judgement is necessary. Nosebagbear (talk) 21:50, 13 September 2022 (UTC)
@Nosebagbear: I get it - much scholarly research featuring Wikimedia projects has obvious errors in it due to the researchers not getting reviewed by someone in the community, and their making fundamental wrong assumptions. Has anyone in the Wikimedia movement ever performed "demonstrable near-flawless judgement" for "data basis that's not included or available on-wiki"? If this has happened, then among ~3000 Wikimedia research papers, I would guess not more than 10 times or perhaps 0 times. What are you imagining about this skill set?
Are you imagining that it is grown in house? Are you imagining with staff of the Wikimedia Foundation, somehow wiki volunteers, or ?? Bluerasberry (talk) 22:56, 13 September 2022 (UTC)
Hi @Bluerasberry, certainly, I'm not indicating that the Foundation's research reaches that level of accuracy. I don't know if you read their output as a matter of course, but there was a piece of work on the UCOC where I raised that it was flawed to be providing recommendations to the UCOC committee (I think that would have been the phase 2 committee), when the research had only approached one half of the issue.
While I'm not aware of any volunteers doing research at this level, that's mainly because I don't know any doing research - I do have that level of confidence in certain individuals and their judgement doing various tasks with non-public data. My verbose answers over the last few days are mainly with regard to your proposal of an external research group that can do all the various bits you indicate could/should be done and are flawed at the moment (either WMF or community). I've not intended to propose that that level of trust could be acquired by a specific alternate method - only that while the in-house mechanisms are certainly flawed, I disagreed that an external organisation would do any better, unless the community were able to check its process and conclusions against the non-deidentified data. Nosebagbear (talk) 14:03, 14 September 2022 (UTC)
@Nosebagbear: Yeah - I agree with all of that. Yes, it is unusual and different that Wikimedia projects require high community participation even in research, when other comparable projects like Twitter and Facebook can send their problems to academic research teams who operate independently of community oversight. I think you are right on that point - we need to grow community expertise to have in-community, peer review from actual Wikimedia community editors and functionaries to oversee research. Without that we will lack validity and also there will be social and ethical transgressions.
I am not sure how that kind of review will look, but some research models which exist for this kind of oversight are en:Institutional review board, en:Community advisory board, en:Ethics committee, and all the en:Category:Citizen science models.
I still am not clear on how far you are suggesting that we develop community research infrastructure, but you are correct that we have lower capacity than needed, and I was not thinking of this much at all.
Besides your emphasis on getting trusted, knowledgeable, in-community people intimately involved in the research, and having mechanisms to double-check research results in the community in a secure way (but with more restrictions than the maximum trustholders at the top), any other ideas? Bluerasberry (talk) 00:22, 15 September 2022 (UTC)
Fair points - I guess there is the usual aspect (that is, all research has this issue, not specifically us) of seeing whether the research holds up, or whether the recommendations provided are adopted and, if so, whether they are effective.
There's no great answer to that, as obviously every hour spent on it is an hour in which new research isn't being done, which is even harder on a volunteer basis - but let me know if you think of some ideas to at least grab the low-hanging fruit? Nosebagbear (talk) 09:39, 15 September 2022 (UTC)
@Bluerasberry "It is more important that we get data about complaints than actually addressing the complaints themselves. This is because we are currently unable to study the problem well." Do you think Wikipedia is worse than elsewhere? Has it got better? And do you think that it is mainly non-good faith/very new editors? And are the issues on user talk/user/article talk/edit comments? Wakelamp (talk) 09:54, 5 January 2023 (UTC)

@Wakelamp: Some thoughts:

  • Error to treat incident reporting as a technical issue This is a social issue which needs technical support, and not a technical issue which needs social support. The research review and social surveying should come first, because even if someone presents a working technical solution, its effectiveness is always going to be in question if credit for the design process does not go to the community. I am saying this in response to "Do you think Wikipedia is worse than elsewhere?" The historical situation is that Wikipedia identified peer-to-peer misconduct first, around 2010-2013, while at that same time the representatives of other platforms like Facebook and Twitter were publicly denying that 1) the existing misconduct on their platforms was significant and 2) that misconduct on their platforms was their problem to recognize. The available evidence is that Wikipedia has the worst conduct of any platform in the world, but the reason for this is that Wikipedia is the only platform with a community governance that values transparency. We have the only misconduct records, because the leaders of other platforms actively destroyed misconduct records, lied about doing so, then lied to say that problems never existed. We need social science and community discussion to tell our own story of why we are different, and also to publish comparisons of ourselves versus big tech rather than letting big tech sponsor journalism to tell their story about this without community voices to counterbalance their world view. The WMF as an institution also lied about misconduct, against the wishes of the community, not because of any individual staff member's shortcoming but because lying in conflict of interest is what corporations do by their nature. That conflict of interest makes all WMF staff unreliable to speak to misconduct, because they are paid to design systems to shift blame away from the corporation, as if that were the only way to design the user experience. There is a conflict of interest here because the WMF is deeply ashamed of the problem, when the community feels no such shame, feels that some amount of misconduct is to be expected, and wants to openly discuss it.
  • Better to convert to specialized statistics than ask generally about all misconduct "And do you think that it is mainly non-good faith/very new editors?" This is not the best direction for conversation. The better direction is asking for statistics and classification of the nature of the problem. Asking for all-Wikimedia stats will not be as useful as asking for stats by language community, topic, and other circumstance. That amount of complaints is beyond human comprehension without automated classification of reports and data visualization. Since you asked, the most common harassment complaint is from people whom wiki editors correctly assess as publishing inappropriate content (usually due to no citations or being promotional) and who get their content deleted. From the wiki quality control perspective that deletion is wanted, but still many thousands of people a year - usually the very new editors you mentioned - feel the emotions of being personally attacked and harassed when wiki editors correctly remove their unsuitable content submissions, so any incident reporting system is going to have to take such reports as well even though our likely response is going to be that the person reporting got the response we wanted them to have. For some language communities and for some topics, there will be thousands of misconduct complaints against new users. Still significantly, for other languages and other topics, there will be merely 100s of misconduct complaints about, for example, political propaganda. If we get only 100 propaganda complaints in a language of India, for example, that may not be many complaints globally, but it could be enough of the same complaint in the same place to merit a community reaction. Also the WMF should not be deciding when there have been enough complaints to merit that reaction; community needs data so that community can make that decision.
  • Most people who want to report currently cannot, so we should make it easier "And are the issues on user talk/user/article talk/edit comments" Not exactly, and also, it may not matter where people report right now because those channels are probably not where we want them to report in the future. I am guessing that we should probably get on the order of 5,000 misconduct reports a year depending on how difficult we make it for someone to issue a report. If it is too hard we may only get hundreds; if we invite people to complain then perhaps we can get 20,000. I am basing this on the number of complaints we currently get in English Wiki admin forums times 2 for the rest of the world, plus VRT emails, plus multiplying all this by 10 because we intentionally make it hard for anyone to complain at all. I imagine three categories of misconduct reports: new users who do not know what is happening but feel harassed; wiki editors with experience who are willing to engage in wiki community governance bureaucracy; and people with cases where they request a personal human judgement which is hard to automate. Human judgment cases will include someone asking us to delete public cited information for safety concerns, accusations of misconduct by wiki community members in positions of power, and accusations of misconduct by WMF staff. I want to emphasize that last point - either WMF inappropriately destroys records of misconduct complaints against their staff because of shame and disbelief that accusations against staff are worth addressing, or they are impossibly secretive and obtuse about their system of addressing such complaints. If the reporting system cannot pull out accusations against WMF staff then something has gone wrong, because at the very least, the clueless public issues lots of bogus accusations against WMF staff whenever someone deletes their wiki promotional submissions.
Thanks for asking. Bluerasberry (talk) 17:55, 5 January 2023 (UTC)
Lane Rasberry presents Wikimedia Private Incident Reporting System for misconduct

My Comments

https://altilunium.my.id/p/PIRS Rtnf (talk) 23:19, 29 August 2022 (UTC)

@Rtnf thank you. Your point about human resources has been well noted. Comments like the one from @Bluerasberry are equally concerned about the labor/human resources/hours that will go into the system and the cases. ––– STei (WMF) (talk) 16:18, 5 September 2022 (UTC)

Scope / Universal Code of Conduct

I find it worrying that the Universal Code of Conduct isn't mentioned until nearly the end of the document. In particular, it's not mentioned in the background and focus sections. Unless the scope is clear to all participants, I'm sure we can expect confusion, non-optimal feedback and dashed expectations. For example:

  • Is PIRS meant to be a reporting system covering all Universal Code of Conduct violations? Or just 'harassment and other forms of abuse'?
  • Is PIRS meant to cover just Universal Code of Conduct issues, or other issues that need to be reported and handled privately (security breaches, legal issues, etc.)?
  • Presumably there'll be a separate reporting method for issues involving those with access to PIRS internals?

Looking forward to clarity. Stuartyeates (talk) 09:33, 30 August 2022 (UTC)

An excellent point @Stuartyeates - I've defaulted to a fairly broad guess at their scope, while Vermont has taken a (probably more reasonable) narrower scope, just to name two. Nosebagbear (talk) 16:10, 30 August 2022 (UTC)
Thanks for bringing this up! I think one of the goals of this conversation is to determine (and align on) scope. MAna (WMF) (talk) 13:55, 5 September 2022 (UTC)

API

As a tool developer, I think that if PIRS became a thing, having an API that tool developers could use would be excellent. Many recent changes patrollers may find T&S issues first and want a way to quickly report them other than through email (which is how I've implemented something like this in the past). Ed6767 (talk) 12:24, 1 September 2022 (UTC)

+1. Frostly (talk) 22:57, 16 October 2023 (UTC)
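To illustrate what the API idea above could look like from a patrolling tool's side, here is a minimal Python sketch; the endpoint URL, field names, and receipt ID are all invented for the example and do not describe any existing PIRS interface:

```python
import requests

# Hypothetical endpoint; PIRS has no public API today.
PIRS_API = "https://pirs.example.org/api/v0/reports"


def report_incident(oauth_token: str, reported_user: str, diff_url: str, summary: str) -> str:
    """Submit a private report from a patrolling tool and return a receipt ID."""
    response = requests.post(
        PIRS_API,
        headers={"Authorization": f"Bearer {oauth_token}"},
        json={
            "reported_user": reported_user,
            "evidence": [diff_url],
            "summary": summary,
            "privacy": "private",               # reporter-selected privacy level
            "destination": "trust-and-safety",  # an opted-in enforcement process
        },
        timeout=10,
    )
    response.raise_for_status()
    return response.json()["receipt_id"]
```

Returning a receipt identifier would also address the point raised elsewhere on this page that reporters need confirmation their report was received.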

An update on Private Incident Reporting System project

Hello everyone, we have an update about the Private Incident Reporting System following a review of the feedback we got from the community on the discussion pages and interviews. Please read it and give feedback on the talk page of the timeline and updates page.

We are looking forward to hearing from you. ––– STei (WMF) (talk) 15:53, 10 November 2022 (UTC)

I think it could be better to keep all the feedback on this page. Sometimes projects like this have several talk pages and it's hard to keep track of the conversations. The talk page of the update is still empty, so it could simply be redirected to here. kyykaarme (talk) 06:57, 5 December 2022 (UTC)

Can anyone report any harassment data yet?

  • How many harassment reports has the team analyzed?
  • What data model do you have for considering harassment reports?
  • How are these reports characterized?
  • How is your model validated?
  • How many community / expert interviews have you done?
  • What timeline do you have for milestones?
  • Have you made a decision about whether you will publish your process as research in a peer reviewed journal?

Bluerasberry (talk) 15:05, 12 December 2022 (UTC)

The PIRS project page has information on what we know, what we don’t, timelines and so on.
Also, we have published a review about the literature we found on harassment on Wikipedia. Please look at that too.
If the information is not enough, or not presented in an optimised manner, please ping me. –– STei (WMF) (talk) 17:50, 13 December 2022 (UTC)
@STei (WMF) How many incidents are reported per year to WMF? What is the rate per wiki edits/talk, and how does that compare to Facebook/Reddit? Wakelamp (talk) 08:00, 5 January 2023 (UTC)

Since I just translated it and was asked about it...

... what goes with what here? Is it a Reporting System for Private Incidents, or is it an Incident Reporting System that is Private?
If it is the former, what on earth are Private Incidents supposed to be? If it is the latter, how will existing reporting systems such as de:WP:VM etc., which are public, i.e. not confidential, be covered and considered? Grüße vom Sänger ♫(Reden) 12:49, 15 December 2022 (UTC)

Hello @Sänger thank you. We will work with a translator and provide a response to clarify matters including the name of the project. –– STei (WMF) (talk) 12:59, 15 December 2022 (UTC)
Sänger's question refers to the ambiguity of "private": Is this a reporting system for private incidents, or is it an incident reporting system that is private (a private system)? In German you have to decide on one of those options. Der-Wir-Ing ("DWI") talk 16:27, 15 December 2022 (UTC)
(translated into German by DBarthel (WMF)) How we can reconcile privacy and transparency is one of the things we are trying to figure out with this project. The reporting tool is part of a larger project (the UCoC) and is being developed in line with the recommendations in the enforcement guidelines. Ultimately, it is meant to help keep people safe when participating in Wikimedia projects.
"Private" in this context means respecting the privacy of community members and ensuring their safety when it comes to harassment and other violations of the UCoC (in German it probably makes sense to use the term "vertraulich" (confidential), since it is about being able to file a report safely). Dealing with harassment and improving the options for those affected by it is something our entire movement is trying to solve. MAna (WMF) (talk) 15:29, 16 December 2022 (UTC)

@DBarthel (WMF): You translated it first, imho also rather odd, hence a ping. Grüße vom Sänger ♫(Reden) 13:18, 15 December 2022 (UTC)

And for the monolingual Anglophones, here is a translation as well, because I just translated it:
What belongs together? Is it a Reporting System for Private Incidents or is it an Incident Reporting System that's Private?
In case it's the first one: What on earth is a Private Incident? In case it's the second one: How are existing reporting systems like de:WP:VM etc. included and treated, that are not private but open?

MVP

The best-known expansion of this abbreviation is of course Most Valuable Player, but as someone just linked on the other page, it's probably something else here: Minimum Viable Product. If you use such insider jargon, you definitely have to explain it; otherwise it just serves to exclude people outside the small circle that understands that jargon. Grüße vom Sänger ♫(Reden) 18:52, 15 December 2022 (UTC)

Ah, someone with the right permissions should mark the change for translation. Grüße vom Sänger ♫(Reden) 18:55, 15 December 2022 (UTC)
@MAna (WMF): You still have to mark the new version of the page for translation, that doesn't happen automagically. Grüße vom Sänger ♫(Reden) 12:54, 16 December 2022 (UTC)

Staff and volunteers needed to support this?

What are the planned staff and volunteers to support this? Could this be trialled with a "report harassment" button for a small number of editors? And will there be a clear rubric on what counts as harassment? Wakelamp (talk) 09:38, 5 January 2023 (UTC)

This is to acknowledge your questions @Wakelamp. We will provide a response asap. –– STei (WMF) (talk) 13:08, 6 January 2023 (UTC)
@Wakelamp thank you for your patience, please see the FAQ section on the project page for answers to the questions you had. We put the responses in an FAQ because we believe others may have similar questions. Thank you for asking them. Let us know how we did. –– STei (WMF) (talk) 06:04, 20 January 2023 (UTC)
@STei (WMF) your link appears not to work. Stuartyeates (talk) 00:00, 31 January 2023 (UTC)
@Stuartyeates Link now fixed. ("#" instead of "/"). Thanks! Quiddity (WMF) (talk) 00:11, 31 January 2023 (UTC)

Question about potentials for abuse...

So, one question I have is whether the development team has thought about, and considered implementing safeguards against, a tool like this being used as a means of harassment itself. When processes like this are divorced from the community context they are intended to protect, they open themselves to abuse by bad actors who can exploit the system. I'm afraid of someone with proper knowledge of social engineering using a system like this to SWAT people they don't like. I truly understand the need to have a confidential system for people to report being harassed, and I applaud efforts to make it easier for such people to get relief; however, the devil is in the details, and an improperly implemented system without proper safeguards can become a tool of harassment itself. Maybe I'm being a bit alarmist, but I want to make sure that this concept is at the forefront of the minds of the people developing this system, and that they've considered it thoughtfully as they implement it. --Jayron32 (talk) 16:51, 2 February 2023 (UTC)

@Jayron32, this is good feedback, and we will keep it in mind. –– STei (WMF) (talk) 09:20, 8 February 2023 (UTC)
@Jayron32 I also wanted to add that the intention behind designing and building a minimum viable version of the PIRS for a pilot wiki is so that when things go wrong, e.g. bad actors finding a way to misuse the system, the problem is small, contained, and addressable. We are thinking about the safeguards we will need, but there is no guarantee that they will be fully effective until we understand what type of misuse we’re dealing with. We want to collect data first, and that will help us figure out safeguards. Our strategy is that the MVP will help us learn and iterate. This means we’ll also have mechanisms in place to help us iterate quickly when misuse occurs. –– STei (WMF) (talk) 17:30, 13 March 2023 (UTC)

2 comments

  1. The FAQ shows that you are trying to solve a problem, but you have no idea what the problem is (no data, no prediction, no background, no history).
  2. An MVP is a version of a product with just enough features to be usable by early customers, who can then provide feedback for future product development. If for you it is an experiment and an opportunity to learn, you will fail (wasting time & money). IOIOI (talk) 21:40, 6 February 2023 (UTC)
    @IOIOI the background of the project is on the main page. The problems we are trying to solve are also mentioned there. There is also some literature review on harassment available. The section on previous work also has some history.
    In the absence of extensive data, as we have mentioned in the scope of the MVP: the way we would like to approach this is to build something small, a minimum viable product, that will help us figure out whether the basic experience we are looking at actually works, which is what you have referenced.
    Since you think using an MVP approach is a waste of time and money and bound to fail, what approach do you suggest we use? What do you see differently? You also mentioned the absence of prediction; what would you like us to predict? If you would like to share your ideas privately, you can email me. –– STei (WMF) (talk) 11:00, 8 February 2023 (UTC)

Poorly thought out process

"How many people will file reports?" seems not to be a question that a minimum viable product can answer. The amount of reports that are going to be filed likely depends a lot on how a given community directs people to the tool as a destination for filing reports. If you run the tool in a few smaller pilot Wikis you won't learn what usage the tool would have in bigger Wikis.

The focus of the MVP process should be: "Are Wikis willing to make decisions via their community processes to adopt the tool because they think it's helpful for them, or is the tool not attractive enough for Wikis to adopt?" As Vermont wrote above, decisions to adopt the tool should be opt-in. The making of the MVP should be focused on learning what the tool needs to do to get opt-in, and the currently listed questions are a distraction from that. ChristianKl 00:16, 8 February 2023 (UTC)

Feedback from itwiki

FYI (it is not possible to notify you because your message was posted without signature). Patafisik (talk) 13:19, 8 February 2023 (UTC)

@Patafisik thank you for the feedback. –– STei (WMF) (talk) 08:00, 9 February 2023 (UTC)

2 questions and a fact

  • Who will be on the other side of the reporting system?

Currently, when asking for help, anyone can intervene, even a user who takes the reported user as an "example to follow", each voting for the other in a conflict of interest. This situation is not fair, and it must be known in advance who will evaluate reports. Too many are biased towards siding with sysops just because of their name. Sysops' experience with incidents consists of causing them and then resolving them for their own benefit.

  • What can you do after you know how many reports there are?

If you don't speak the language and you don't want to interfere with the local community's decisions, then you just count how many reports there are and that's it. If there are many reports of a user being an abuser, the whole system can be stopped simply by blocking the potential reporters in advance; this will only lower the number of reports, not the number of incidents.

  • The 3 phase system you used as an example for the Italian Wikipedia is completely wrong.

Sysops don't follow the rules: they block directly or jump to the last phase, against the guidelines. Meanwhile, to report a sysop a user has to find a mediator, but nobody wants to be exposed, and it is impossible to ask for help on any page; the only way indicated is to ask on the sysops' requests page, where only sysops answer. In practice this system gives no possibility to report a sysop, even when the help page shows they made an attack; they blame others even when many users have reported the same problem. They use harsh language, mocking and threatening every user they can. The Italian community is biased and ruled by a few users who act against the rules. Pinedsa (talk) 16:07, 20 February 2023 (UTC)

@Pinedsa thank you, we have taken note of your concerns. –– STei (WMF) (talk) 13:00, 21 February 2023 (UTC)

Feedback in Japanese

I have set up a section for feedback in Japanese. It is very helpful that there will be a system for reporting incidents privately. One problem is that the Japanese translation of this system's name is 「非公開事案通報システム(PIRS)」. Reading the FAQ, the 「事案」 ("matter/case") here mainly seems to mean "harassment", or even more serious "incidents". The word 「事案」, however, can be read as referring to an ordinary "issue" (for example, "this article is missing from this category") and does not convey any urgency or seriousness. I think a name such as 「ハラスメント非公開通報窓口」 ("private harassment reporting desk") or 「ハラスメント窓口(非公開)」 ("harassment desk (private)") would be more appropriate. In Japan, too, the locations of emergency equipment such as fire hydrants, emergency exits, and AEDs are labelled as conspicuously as possible; it would even be fine to add an emoji. I would ask everyone involved in the Japanese translation to please give this some thought. There was recently an incident on the Japanese Wikipedia, and the Trust and Safety team dealt with it. I am very grateful.

ji-an 【事案】

(Noun) A matter that is currently at issue, or a matter that ought to be treated as one; a case. ▷ Tōkyō Asahi Shimbun, 12 May 1903 (Meiji 36): 「是れ読者が頃日の西電により知悉する所の事案なり」

-- 精選版日本国語大辞典 (Seisen-ban Nihon Kokugo Daijiten) (C) SHOGAKUKAN Inc. 2006

Kizhiya (talk) 01:59, 22 February 2023 (UTC)

@Kizhiya, thanks for the feedback! –– STei (WMF) (talk) 05:42, 22 February 2023 (UTC)

Feedback on 4 Updates on the Incident Reporting Project – July 27, 2023

Hello community, please check out our latest update and leave your feedback below. –– STei (WMF) (talk) 21:22, 27 July 2023 (UTC)

Hello, thanks for your efforts! The feature for reporting a topic and comment may not be enough to report a user. Normally we would report several comments and topics (and edits), rather than only one or two. Perhaps there could be a feature for reporting a user that can include several links and pieces of evidence? Thanks. SCP-2000 14:22, 28 July 2023 (UTC)
Firstly, I want to note that I view the changes as a significant improvement; they reduce several of the core concerns I had. From there, I'd back SCP's point - some means of simplifying diff addition, up to and including several diffs, would be a benefit over a single comment note. Cheers, Nosebagbear (talk) 16:33, 30 July 2023 (UTC)
Thanks, that's great feedback! We'll keep this in mind. MAna (WMF) (talk) 10:42, 2 August 2023 (UTC)
Why are users required to add an email address, or is this only required if they are not logged in? GPSLeo (talk) 13:20, 8 August 2023 (UTC)
Hi GPSLeo, the designs are currently exploring reporting to a private space such as an Admin email address or the Trust and Safety team's emergency email. The reports that need to go through those spaces are usually quite sensitive, so ensuring the reporter’s privacy and safety is key. This means the conversation most likely cannot happen on the User Talk page, which is public, so we think an email address needs to be provided for the responders processing the report in case they need to contact the person who filed it. However, thanks for bringing this up; we can also explore the possibility of making the email optional. MAna (WMF) (talk) 15:43, 14 August 2023 (UTC)
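A minimal sketch of the report shape implied by this thread (several evidence links, a short note, and a contact email that may or may not end up optional). The field names and example values are illustrative assumptions, not the actual form schema.

```python
# Illustrative only: field names are assumptions, not the real Incident
# Reporting System schema.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class IncidentReport:
    evidence_links: List[str]             # several diffs/permalinks, per the feedback above
    note: str = ""                        # short free-text context
    reporter_email: Optional[str] = None  # possibly optional, as discussed above

example = IncidentReport(
    evidence_links=[
        "https://xx.wikipedia.org/w/index.php?diff=111111",  # placeholder diff links
        "https://xx.wikipedia.org/w/index.php?diff=222222",
    ],
    note="Repeated personal attacks across two talk pages.",
    reporter_email=None,  # omitted if the field ends up being optional
)
```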

Grants:IdeaLab/Centralised harassment reporting and referral service

User:Bluerasberry and I proposed a very similar tool in 2015 and presented on it at Wikimania 2015. See: Grants:IdeaLab/Centralised harassment reporting and referral service. It received dozens of endorsements. The WMF has been sleeping on this issue for too long, and using their liability as a reason not to act, while volunteers have to take on mediation work they simply aren't trained to do. We have been researching this ever since, and organizing our own methods of intake through Wikimedia LGBT+. Hexatekin (talk) 16:33, 8 August 2023 (UTC)

Hi there! I was formerly the designer for Trust and Safety Tools and am switching to Moderator Tools. My colleague Joydeep is taking over for me! I just wanted to quickly respond to let you know that our team did indeed look at your & Bluerasberry's proposal from 2015 when we were doing exploratory research and getting started with the Incident Reporting System. I was also excited to see how much support your idea had/has. I liked how you defined the problem, and I think we're fully aligned on that. The solution you proposed is pretty close to what we're building! There will be a set of reasonable entry points, users will be able to file a complaint/report, and that report will be routed to the correct entity based on the incident. We worked closely with WMF's Trust and Safety department when designing the service to ensure it meets the criteria they're looking for as well. As a side note, I know it's frustrating that the project has taken this long, but I'm excited about the progress we've made so far and I hope you'll share your feedback & thoughts on it as we move forward! Thank you! AVardhana (WMF) (talk) 20:34, 7 September 2023 (UTC)

No word limit

Communication tools like an Incident Reporting System should be designed to facilitate communication, not to limit it. https://meta.wikimedia.org/wiki/File:Mobile_IRS_Step_2_(Form).png suggests a 50-word limit for users to explain a violation. I don't think there's any good reason to hold a user to a particular word limit when trying to explain a problem. Users should also be able to add multiple links in a single report (at the moment the UI looks like it only provides space for one link).

Otherwise, I think the way this feature developed is good. ChristianKl12:59, 9 August 2023 (UTC)

Hi @ChristianKl! Thanks so much for reviewing what we've shared and voicing your thoughts. We really need all the constructive community input we can get!
>> Re: the word count. We recently decided to increase the limit from 50 to 200 words, after talking with the Trust and Safety specialists at the Foundation. I hear you on the importance of allowing someone to explain a problem, and we're assuming that links of evidence plus 200 words will suffice both for the user reporting and for the responder reviewing the report. We've received feedback from responders (i.e. Admins and T&S staff) that excessively lengthy reports are counterproductive, so we want to encourage reporters to be as succinct as possible. 200 words is half a page, so let's see if that's enough! We can always increase the word count if need be.
>> Yes, users will be able to add multiple links to a single report. Glad you noticed this detail! We were just keeping the UI super simple with this first iteration but will incorporate multiple links for sure. AVardhana (WMF) (talk) 17:42, 16 August 2023 (UTC)
Do links count as a single word? If so, then 200 words should be okay for the indicated use case of the IRS. Nosebagbear (talk) 01:28, 19 August 2023 (UTC)
(I'm aware that, sadly, Nosebagbear has passed away but wanted to reply in case it's helpful information for others) Links will be counted separately from words so whoever is filing a report will have 200 words to explain the context if they want to + links to evidence of the incident. AVardhana (WMF) (talk) 20:10, 7 September 2023 (UTC)
Just to note that the word count limit has been changed to a code point limit. This is usually similar enough to a character limit, but code points better handle internationalisation. For the MVP, this limit is 500 code points. WBrown (WMF) (talk) 17:30, 6 November 2023 (UTC)
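As a rough illustration of the difference between the two kinds of limit (a sketch only; the tool's actual counting rules are not specified here): in Python, len() on a string counts Unicode code points, whereas a word count depends on whitespace, which many languages don't use between words.

```python
# Rough illustration of words vs. code points; not the tool's actual validation logic.
LIMIT = 500  # code points, per the update above

def within_limit(text: str, limit: int = LIMIT) -> bool:
    return len(text) <= limit  # len() on a Python str counts Unicode code points

english = "Repeated personal attacks on my talk page."
japanese = "トークページで繰り返し個人攻撃を受けています。"

print(len(english.split()), len(english))    # 7 words, 42 code points
print(len(japanese.split()), len(japanese))  # 1 "word" (no spaces), 23 code points
print(within_limit(english), within_limit(japanese))  # True True
```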