Wikimedia Foundation/Legal/Community Resilience and Sustainability/Conversation Hour 2025 01 30
You are invited to the quarterly Conversation Hour led by Maggie Dennis, Vice President of Community Resilience and Sustainability, on 30 January 2025 at 20:00 UTC.
Jacob Rogers, Associate General Counsel for the Wikimedia Foundation, will be present to discuss supporting and defending community self-governance, in light of the increasing challenges around the world that threaten this core value.
Maggie and others from the team will also be available to discuss Trust and Safety, the Universal Code of Conduct, Committee Support, and Human Rights.
This conversation will happen on Zoom. If you are a Wikimedian in good standing (not Foundation or community banned), write to let us know you will be attending the conversation and share your questions at answers@wikimedia.org at least one hour before the conversation. Please place “CR&S” in the subject line. Someone will follow up with the Zoom details.
If you do choose to participate in this conversation, Maggie would like to bring some expectations to your attention:
I can't and won't discuss specific Trust and Safety cases. Instead, I can discuss Trust and Safety protocols, practices, and approaches, as well as some of the mistakes we've made, some of the things I'm proud of, and some of the things we're hoping to do.
I will not respond to comments or questions that are disrespectful to me, to my colleagues, or to anyone in our communities. I can talk civilly about our work even if you disagree with me or I disagree with you. I won't compromise on this.
You may view the conversation on YouTube and submit questions live in Zoom and on YouTube.
The recording, notes, and answers to questions not answered live will be available after the conversation ends, usually in one week. Notes from previous conversations can be found on Meta-wiki.
Notes
People with a lot of money and time can direct a lot of abuse at editors. What has the Foundation done to protect them? These types of threats create a chilling effect on content and drive people away.
- Jacob Rogers (Associate General Counsel for the Wikimedia Foundation): This question is very broad, and I think editor safety or editor protection is something that comes in many layers. The first layer is the basic digital security that editors practice. If you’re an editor working on something, people see your username or, if you’re not logged in, your IP address. We’re working on switching from IP addresses to temporary accounts. Whatever they see, if they google it, what will they find? Will it lead to your real identity, social media, or workplace, or is it relatively separate and kept just to Wikipedia? A number of Wikipedia projects have options for separate accounts (allowed sock puppets) that can be used in high-risk areas and registered with the admins of that project. That kind of [personal] digital security practice is the first thing. If people can’t connect you to anything else, it cuts off a lot of harassment problems before they become problems. Secondly, the Foundation collects very little data about users. If someone comes to the Foundation with legally backed demands for data, there really just isn’t much to share. Even hackers wouldn’t find much. You can read more about this in the privacy policy and data retention guidelines. IP addresses are kept for 90 days and then deleted or anonymized. The only long-term data we retain is the email address you register with your account [note: this is optional; you can register without an email if you don’t want to use it for password recovery]. If you have an email address that doesn’t otherwise lead to your identity, you have protection even if we are required to disclose it. Finally, we have assistance available for editors or functionaries who are harassed over good-faith edits; we may be able to help you find a lawyer or help pay for one.
- Maggie Dennis (Vice President of Community Resilience and Sustainability): Several years back, we started thinking about how we can support the communities against disinformation as an institution. Community groups are in charge of monitoring content and behavior, and one of the things we can do is help when they reach out with concerns. We have staff who have the training and time to look into patterns of edits, and we can identify when there is a disinformation campaign, which we see a lot when there are people with money and time to go after specific editor groups. We try to help maintain some of the information necessary for self-governance to work, and we provide feedback when we see a pattern emerging.
Will the annual review of the UCoC result in any changes to the original policy?
- Maggie: I don’t recall how old the Universal Code of Conduct is. We waited until the UCoC was fully functional so everyone could see how everything was working. I couldn’t imagine us asking members of the community for feedback and the policy not being able to change.
- Barkeep49: I’m a member of the U4C, which is responsible for overseeing an annual review of the UCoC. This is the first annual review since the UCoC was passed in 2022. There are a lot of comments out there. The Board has historically said they control the UCoC policy, so it is less clear how changes might go from the community to the Board and then be approved. The UCoC review might have to move on a different timeline than the reviews of the Enforcement Guidelines and Charter, since there are so many comments. We will continue to work with the Board and the Foundation on the parts that we as the U4C do not control.
What is the Foundation doing to respond to ongoing geopolitical situations? Should the Foundation move legal jurisdictions?
- Jacob: There are two different questions there: the general response and the question of jurisdiction. I’ll take them in reverse order, because the jurisdiction question has a clear answer. The answer is no; it probably wouldn’t do much, because our content would still be available in the US and thus still subject to US jurisdiction, and there are international treaties that would apply. My off-the-cuff guess is that a move like that would also be very expensive. I could only imagine us doing that if it was the only possible solution and nothing else made sense. As for what we are doing to respond, there are geopolitical situations happening all around the world at the moment: (1) the changing situation in the US, with a number of executive orders being implemented on a day-to-day basis; (2) major new European regulations, including the DSA, which is in its second year; (3) the UK’s work on its Online Safety Act, which is coming up for a vote in Parliament; (4) laws being proposed in various other countries around the world, such as Brazil and India. There are many different geopolitical situations happening, and our response has strategic and logistical elements. Strategically, we monitor how these laws apply to the Foundation and to users. There has been a shift since 2016 in how legal jurisdiction over websites is thought of. It might have started with the GDPR, but other countries have more recently been following that model. Laws are extending jurisdiction to website hosts and content providers that are not located in the country or that don’t strongly target the country. They are expansive, jurisdictionally. Those laws make it possible for courts in those countries to bring cases, or for law enforcement to make demands under those laws, where they previously did not have jurisdiction. The sum total of these acts across the world is changing the legislative environment of the web completely, including for the Foundation. We watch these laws, we have outside counsel, we have policy folks, we talk to different chapters and user groups around the world, and we talk to users in different countries and others; we are doing our best to keep our strategy updated with the changing requirements and obligations. That is an overall vague answer, but that is the idea. On an operational level - Foundation-internally - we are focusing more of our work around these types of issues, with internal meetings thinking through some of these changes in laws, cross-functional teams briefing each other, and all the corporate collaboration of sharing information, trying to make sure that our work accounts for the changing situation as it happens. Obviously, it is moving very quickly in some places, especially in the US at the moment, so there is a little bit of just watching to see where things settle, doing our best to stay as informed as possible, and figuring out what we need to do as things settle down.
I recently heard that the Foundation received an order to hand over editor information in a court case in India. What happened and why would people want to contribute if their information is not kept safe?
- Jacob: This question is about a specific case, and I can’t go into a lot of detail about it. We did make a public statement about it on-wiki. The brief specific answer is that we provided data to the court, but not to the opposing party. The broader issue this raises, similar to what I was just talking about, is different laws having broader jurisdiction. The way I think about this is that the Wikimedia Foundation can do a lot to protect editors, again starting at the base level of not having much data in the first place, so that there’s not much for people to get. If someone does demand data, we can protect users to the greatest extent legally possible, and in some cases we can work with users to get their own lawyers to help them. That can be more effective than what the Foundation can do: there are many cases in many jurisdictions where the person whose data is being sought has more rights than the Foundation does to oppose disclosure. If we can let that person know and, where possible, help them get legal support, that may better protect them and reduce the data disclosed. That said, almost all of these sorts of data claims relate to content on Wikipedia. Somebody says the content is false, or poorly sourced, or written in an insulting manner, or steals someone else’s property - and the better the complaint, the harder it is to defend an editor. If you go on Wikipedia and write a bunch of unsourced lies, the Foundation and you might both fail to defend against that claim. If you write carefully with good quality sources and ensure that what you write reflects those sources, we have a much better chance to defend that, as do you, so that nothing is disclosed. That quality-of-work issue is, I think, particularly important, and even more so in the face of these changing jurisdictional requirements. A lot of countries around the world claim jurisdiction based on where the article subject is located, a point we addressed in our 2023 Terms of Use update as a warning for folks. We’re seeing more and more courts take a case based on the location of an article subject even if the editor isn’t located there. That can create challenges in defense. The quality of the work, especially on biographies of living persons, is important.
I have been wanting to ask for a while. What is the story with Maggie’s cow?
- Maggie: I like cows, and when you are a mother, you get a lot of gifts related to the things you like. She was a gift from my darling daughter-in-law. I love her energy. She makes me feel peaceful and relaxed. She does not have a name but is just my cow.
I was talking with some other WikiConference North America organizers today. We recognize that the current US visa restrictions pose an undue and perhaps insurmountable obstacle for many people from other countries in our area, and that Canada may not be much better. WCNA 2025 will be in NYC this year, but some wish to think about future years under the current US administration. Can you point us to a source for learning what visa requirements other countries and dependencies in our area have? This would be useful when considering future WCNA conferences.
- Note: 1. Folks in the YouTube chat would like to see more remote participation options and more leadership involvement in planning around location, accessibility, privacy, and remote participation. 2. While we aren’t recommending this site per se, there is good information there regarding international travel and the new administration in the United States.
- Jacob: The visa question is not something I am familiar with. This might be a question for the Foundation staff connected with the organization of conferences.
- Risker: Visa regulations change so fast among countries that it’s difficult to plan. Canadian regulations change several times per year, and we can anticipate changes in the US over the next few years. What is required today is not necessarily what will be required at any point in the future; we can just hope for a decent snapshot, and it’s really hard for anyone to plan long term. Even at the best of times, the Foundation doesn’t have influence over the governmental organizations that make decisions on these visas. We have encountered people who have not been approved because of the prescription medication they are taking. There may be independent travel organizations which could provide more detailed information.
- Nasma Ahmed (Senior Manager of Trust and Safety): We do have staff at the Foundation who do this type of assessment for grant applications and scholarships. The challenge is planning in advance and the situation can change pretty quickly. Something we might need to consider is a level of adaptability given the shifting geopolitical environment.
- Response from Cornelius, who works on conference support at the Wikimedia Foundation: The best source for learning about visa requirements of countries and dependencies is certainly Wikipedia. There is also a website called Sherpa.com where you can check on individual travel requirements (stating your nationality and your itinerary).
I noticed the reporting process for affiliates is changing. The new format is difficult for smaller affiliates. What is the purpose of asking for all this data?
- Maggie: I am sorry to hear that it is difficult for smaller affiliates, and I hope we come up with processes that are easier for them. My understanding is that the Affiliations Committee and the Board worked together to discuss what was going well, what was not going well, and what was needed to support the affiliate system. We want to make sure our affiliates are healthy and supporting a healthy movement.
- Mike Peel (Wikimedia Foundation Board of Trustees member): The idea is to streamline what we need and affiliates just submit the answers once. We want to spot problems before they become larger problems and support affiliates in a constructive way.
- DerHexer: I can offer an example of using the new process, from the Wikimedia Stewards User Group. Personally, the new form felt much shorter, and I felt the need to write additional content to fully capture our work. Many questions were easy to complete, but the question about reporting all of the meetings might be cumbersome for larger affiliates. I didn’t feel like I had to write more in the lower sections, but I did because I wanted to add more detail.
I have a conflict of interest with one article on English Wikipedia. I work on biographies of living persons, too. The subject of the biography where I have a conflict told me about a serious error and asked what to do. I pointed them to all the “right” channels - VRT, the noticeboards. They were met with hostility and suspicion, and nobody helped. I can’t even tell anybody about the problem without outing myself, which I shouldn’t have to do since I don’t touch this conflict area. What am I supposed to say when this person says that all they have left to do is sue?
- Jacob: This is a question where we don’t have all the information needed to figure out the best solution. The ultimate goal in this situation is to get the information about the error to the appropriate community in a form that is understandable under that community’s policy. The problem a lot of article subjects run into is that they violate conflict-of-interest or other policies; even when they are right about the content, they may not get the help they want, because they come across as extremely angry and threatening and sometimes have difficulty explaining their situation. So the question is: even when they are right, how do we get that information to the communities, and how do we help them? There are several possible ways. We can help them update the language in a follow-up email to VRT so they explain themselves better, without legal threats, and send the message to the appropriate community. If they are considering a lawsuit, I recommend they write to legal@, where we can help with that same service - updating the language and providing the community with something they can work on. If you look at the total number of legal demands listed in the Transparency Report compared to the number that turn into lawsuits against the Foundation, there are a couple thousand demands each year and only a handful of lawsuits. Helping people get their problem solved through the community is the majority solution. There are wrinkles, and this question might not include all of the information. Sometimes secondary sources are incorrect, and the need is to get the original reporters to issue corrections, as otherwise Wikipedia will continue to be wrong because it relies on those secondary sources. There are multiple different routes. The need is to take the initial request in the state it is in, understand the specifics, articulate what is wrong or missing with the sourcing, writing, or policies, and pass that information along. A lawsuit should come only if all of that fails and there is still a major point of contention; I feel it takes a lot of work before a lawsuit really makes sense.
Recently, the English Wikipedia passed a policy disallowing AI-generated portraits of living people. What is the stance of the Wikimedia Foundation on legal issues that could arise from the intersection of AI content and biographies of living people?
- https://en.wikipedia.org/wiki/Wikipedia:Village_pump_(policy)/Archive_200#BLPs
- Jacob: There are two different aspects to this. One is the status of AI work under copyright law. There are several different areas of guidance. Roughly speaking, the copyright office in the US has issued guidance, and its stance is that AI-created outputs are likely not copyrightable; if someone is just prompting the AI, that probably does not provide enough human input to make the output copyrightable. An AI-generated image therefore likely does not violate copyright law or create a situation that would make the Foundation do a DMCA takedown if someone complained about it. But when you get to images of living people, there could be non-copyright rights involved, like a right of personality and to one’s own image, which vary between for-profit and non-profit contexts. That could create a legal problem, or a “this-is-legal-but-not-in-line-with-Wikipedia-policies” situation; for example, an image might be reusable for non-commercial purposes but not for commercial purposes. This is an unanswered part of law right now. There have been proposals in both the US and Europe for laws around the right to one’s own image and one’s own voice in the AI context; as far as I am aware, those laws have not been fully passed. The EU AI Act sets out risk levels, and if an AI is generating a lot of high-risk material, that AI would be subject to restrictions and safeguards, but not necessarily each individual image. There are potentially a lot of Wikipedia policy questions here. Going back to the previous question, what do you do if someone is just upset? This is a situation that could occur. The ideal route would be to encourage them to provide a CC-licensed image of themselves [to replace the AI image they dislike], which makes it about improving the article instead of about a conflict. Then there is the bigger question: “Is any AI content safe, given that it might violate copyright in its training data?” That is probably low risk with regard to creating images of people, but more of a risk where identical written work, like song lyrics, is produced matching some of the training data. I wouldn’t say it’s zero risk, but I wouldn’t necessarily say that people should require zero risk; there may be some types of AI that are okay to use, and if the Foundation did get a DMCA notice based on one such image, we would take that image down, not all AI images. If we received a lot of DMCA notices based on AI images, that would be a sign that something might not be working. I think there are a bunch of uncertainties across multiple jurisdictions about what kinds of problems this might generate for Wikipedia. Communities should be thoughtful about how AI-generated images fit the policies of their particular wikis. For example, if you’re trying to get an image uploaded for someone who is now dead, there is no way to get a CC-licensed image from them; there is a likelihood you could come up with an AI image that more or less fits the English-language non-free content policy criteria [in the situation where it’s impossible to get a CC image]. Other communities without such an expansive non-free content policy might not want that same thing.
Is there a protocol if a volunteer is harassing (stalking, bashing, telling lies about) someone who is employed by a Wikimedia affiliate? The affiliate is taking some steps, but I wonder if there is some way to block this person at the global level.
- Maggie: I oversee the team that does Trust and Safety. We are here to protect the community. Affiliate staff are community. My staff are community. Volunteers are community. We are all one movement, and that kind of behavior is not okay. It’s not just about protecting the individual but about protecting other people, because that kind of behavior is rarely limited to one target. I will also admit that there are always challenges when it comes down to evidence, particularly for things that happen offline. It’s easier to deal with things we can trace, but there are plenty of ways we can work, especially with affiliates, to understand what their investigations have shown and see how that applies. The Universal Code of Conduct applies to all spaces, and we are here to help if a global ban is appropriate. It’s not about whether or not it was an affiliate staff member; it’s about the behavior and the safety of the community. This sort of thing hits close to my heart.
- Jacob: I will supplement Maggie a little on this, with a couple of caveats. Caveat one: we can globally ban someone because they are harassing people, and it doesn’t have to be on-wiki harassment, but there are potential evidentiary problems [related to off-wiki conduct], and there is a limit to what we can do. We can globally ban someone, which means they are banned from websites we host and from attending conferences we host, but it doesn’t mean we can kick them out of spaces that affiliates set up, so the affiliate does need to do certain things to keep their own spaces secure. Trust and Safety liaises with affiliates about these issues, but the Foundation cannot single-handedly keep people out of spaces around the world. [Caveat two:] Affiliates that are incorporated may have special limitations when it comes to their own staff. They may not be able, or may not want, to have the Foundation help them in the same ways, based upon the laws of their country; that has to do with employment laws and obligations.
I wanted to apply to be an affiliate and then I saw there's a new lengthy process in place, including a live interview. What is this meant to help with?
- Maggie: I oversee not only Trust and Safety but also Committee Support. The goal is to help AffCom move as quickly as possible. Volunteers should be using their brains, not doing busy work; staff can do the busy work, and volunteers should be focused on figuring out how to move forward with things. One of the benefits of having conversations with potential affiliates instead of relying on paperwork is that you can get clarification on questions and answers. This is especially important when you have people working across multiple languages, with most communicating in English, which is often not their native language. So how can we best use these opportunities to get the answers AffCom needs to understand the challenges an affiliate is facing and what they are hoping to achieve? The interview is also an opportunity to adapt, to understand what support a group needs, and to work more collaboratively with them if they are having challenges reaching the stage where they become a recognized affiliate.
- Mike: I am an AffCom liaison for the Board of Trustees and what Maggie just said is the main reason for it.
Can Maggie upload the cow painting to Commons?
- Maggie: I don’t know who created it and have no information about the creator, so I think we’re going to have to stick with de minimis. But if you happen to be the copyright owner of the cow and you wish to release it, I am happy to share the process with you.