User talk:Trystan
Welcome to Meta!
Hello Trystan, and welcome to the Wikimedia Meta-Wiki! This website is for coordinating and discussing all Wikimedia projects. You may find it useful to read our policy page. If you are interested in doing translations, visit Meta:Babylon. You can also leave a note on Meta:Babel or Wikimedia Forum (please read the instructions at the top of the page before posting there). Happy editing!
-- 17:29, 21 August 2011 (UTC)
Thanks for your helpful comments
I suggested to Alec that you and he might work on a page describing options for moving forward, based on a comment of yours from a week back. –SJ talk | translate 00:14, 6 September 2011 (UTC)
- I'd be happy to; thanks for setting one up.---Trystan 13:16, 6 September 2011 (UTC)
Feedback
I agreed with so much of what you wrote! I can't quote it all, but let me quote a bit of the parts that really resonated with me.
Seven principles for implementation
- We acknowledge that warning labels prejudice users against certain classes of content, and are therefore an infringement on intellectual freedom.[1]
- Yes! I think it's very good to acknowledge that there are HUGE dangers and everyone needs to start off knowing that.
- I'd add that this is only the case when the warning comes from an authority-- especially an NPOV project like Wikipedia. But warnings in general can actually be great recommendations-- some of the best books I've ever read were "recommended" to me by people I disagreed with-- who warned me not to read THAT book or watch THAT film. --AlecMeta 03:52, 10 September 2011 (UTC)
- In a few extreme cases, this infringement on intellectual freedom is justified in order to give users control over what images they see, where an objectively definable set of images can be shown to cause significant distress for many users.
- Yes! This is my thinking as well.
- "objectively-definable" is an interesting term here. Neutrality requires an objective definition. But what's being measured is just an emotion-- subjectivity defined.
- We acknowledge that any system of warning labels will be inherently non-neutral and arbitrary, reflecting majority values while being over-inclusive and under-inclusive for others, as individuals have widely different expectations as to which, if any, groups of images should be filtered, and what images would fall within each group.
- Yes! The worst possible thing would be for us to try to say "Labels/Tags" are fair, just, authoritative, or accurate. We can do something non-NPOV-- but we MUST be upfront that we're doing something arbitrary, subjective, biased, flawed, and 'without authority'.
- Categories are not warning labels.
- This is one of the really really obvious points that came out of the discussion. The label system needs to be completely detached from the categories system.
Recent thoughts I've been having are that what we're asking people to do is catalog their emotional reactions to content-- and that is something we need to be doing for LOTS of emotions, not just 'offense'. I actually would like to make a label like [[Label:AlecMeta/Images I find awe-inspiring]].
Labels like that would be a trove of data to work from. Someone will make a list of shock images, which I can use to either seek OUT shock images or ask to shutter them. And so then we have collaborative, culture-neutral, subjective labeling. As long as this isn't hooked up to a filter, this is probably a really good thing-- MOST subjective labels won't be about offensiveness, they'll be about "Awesomeness" or "Mindblowingness".
Here's where I get stuck and need your help in this theoretical model. How can I take all that subjective data of how content affects emotions and turn it into an OBJECTIVELY-made yes-no decision about whether to shutter an image?? I can hack together a few cruddy solutions, but it needs work.
We have to start with completely-subjective data, process it, and produce objectively-justifiable binary decisions. That's why this problem is hard, I think-- subjectivity isn't our specialty.
Keep up the good work-- we're thinking along basically similar philosophical lines, I think. --AlecMeta 03:52, 10 September 2011 (UTC)
- If I recall correctly, you've already mentioned, earlier in the discussion, the tool that I would suggest: tagging of images by individual users. I was thinking that a "tagging-based filter" should be added to the Next Steps page alongside the label and category-based options. In a tagging system, any user can apply any short string of text to any image, collectively generating a folksonomy. So you could apply the label "Awesome" to an image, and if a few others do the same, it would be returned with a high rank when someone searches for that tag.
- The social bookmarking site Delicious uses this system, so you can search, for example, for websites that have been popularly tagged as being Awesome. (The second link definitely is!)
- The difficulty, as you say, with filtering would be setting the threshold. Possibly, this control could be given to the user (a rough sketch follows below). For example, there could be an interface where the user selects a tag that they want to filter (e.g. "nudity"), and then sets whether they want:
- an absolute threshold (i.e. Have 1/5/10/50 people tagged this with "nudity"?), or
- a percentage threshold (i.e. Of the people who have applied any tags at all to this picture, have 1%/5%/10%/50% tagged it with "nudity"?)
- I'm not sure how tagging would work in a multi-lingual environment, as users are free to generate their own tags without restriction, so equating translations or near translations would require some novel approach.
- This system would be closest to the open-list proposals, and would share their strengths and weaknesses. It is highly flexible, but the major concern is that users will come up with tags that others would find offensive.--Trystan 14:37, 10 September 2011 (UTC)
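To make the threshold idea above concrete, here is a minimal sketch of how such a per-user filter could work. The tag data, usernames, and function names are all hypothetical, invented only for illustration; this is a toy model, not a worked-out design.
<syntaxhighlight lang="python">
# A toy sketch of the per-user threshold filter described above.
# All tag data and names are hypothetical, invented for illustration only.

# Per-user tag sets for one image: {username: set of free-text tags}.
image_tags = {
    "user_a": {"clouds", "bicycle"},
    "user_b": {"clouds", "nudity"},
    "user_c": {"clouds", "balloon", "nudity"},
}

def should_shutter(tags_by_user, tag, absolute=None, percentage=None):
    """Return True if the image crosses the reader's chosen threshold for `tag`."""
    count = sum(1 for tags in tags_by_user.values() if tag in tags)
    taggers = len(tags_by_user)  # users who applied any tag at all
    if absolute is not None and count >= absolute:
        return True
    if percentage is not None and taggers and 100 * count / taggers >= percentage:
        return True
    return False

# Absolute threshold: hide if at least 5 people tagged it "nudity".
print(should_shutter(image_tags, "nudity", absolute=5))     # False (only 2 did)
# Percentage threshold: hide if at least 50% of taggers used "nudity".
print(should_shutter(image_tags, "nudity", percentage=50))  # True (2 of 3)
</syntaxhighlight>
Either threshold style reduces to a simple count over the per-user tag sets, which is why the choice could reasonably be left to each reader rather than fixed by the project.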
- I was thinking that a "tagging-based filter" should be added to the Next Steps page alongside the label and category-based options. In a tagging system, any user can apply any short string of text to any image, collectively generating a folksonomy
- So, what makes an "open-list label" system different from a tagging system? I hadn't added it because I figured labels were the same as tags.
- I'm not sure how tagging would work in a multi-lingual environment, as users are free to generate their own tags without restriction, so equating translations or near translations would require some novel approach.
- If the tagging system were something we really did "big", there are automated statistical tests that could pretty quickly figure out which tags are similar (a rough sketch of this follows below). Indeed, the system could pretty much figure out how similar users are to each other, and use inference to figure out what you'll like/hate. People who love the same content could meet up to collaboratively work on content they both have a passion for.
- And then the data would be AMAZING for social science. If lots of users massively tagged, it would show us so much about our readers, and human culture in general. Imagine being able to actually see, as fact rather than speculation, how humor in one culture differs from humor in another, and knowing that the answer was data-driven and valid, not just stereotypes. The same goes for every other emotion.
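One rough way to picture those "automated statistical tests" is to score how often two tags land on the same images. The sketch below uses Jaccard similarity over made-up tag data (the tags, image ids, and numbers are all invented for illustration); the same kind of overlap score is also one possible way to equate translations of the same concept.
<syntaxhighlight lang="python">
# A toy sketch: finding similar tags by how often they co-occur on the same images.
# The tags and image ids below are invented; a real run would use actual tag records.
from itertools import combinations

# tag -> set of image ids that tag has been applied to (by any user)
tag_images = {
    "nudity":  {1, 2, 3, 7},
    "desnudo": {1, 2, 3},   # Spanish tag landing on mostly the same images
    "clouds":  {4, 5, 6},
    "nuages":  {4, 5},      # French tag overlapping heavily with "clouds"
}

def jaccard(a, b):
    """Overlap between two sets of images: 0.0 (disjoint) to 1.0 (identical)."""
    return len(a & b) / len(a | b)

# Score every pair of tags and list the most similar pairs first.
pairs = sorted(
    ((jaccard(tag_images[t1], tag_images[t2]), t1, t2)
     for t1, t2 in combinations(tag_images, 2)),
    reverse=True,
)
for score, t1, t2 in pairs[:2]:
    print(f"{t1} ~ {t2}: {score:.2f}")
# nudity ~ desnudo: 0.75
# clouds ~ nuages: 0.67
</syntaxhighlight>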
- A Grand Compromise
- It looks like about 10% of the respondents had a "hell no" reaction to the filter idea-- maybe as high as 15%, maybe as low as 8%. It'd be great if those 10% were people who had never edited our projects, but I think we're going to find that the "Hell-No-10%" are probably some of our most active editors and potentially even active donors.
- A certain percentage doesn't want this, never ever ever. If we build it, they'll be sad, and might tend to leave us.
- But we're Wikimedia. We are amazingly powerful, we have one-of-a-kind, only-one-in-the-wide-world collections of talent and potential, and we're not doing badly on resources either. We don't have to sit down at the bargaining table with the "Hell No" editors empty-handed and inflexible. On the contrary, we have a lot of room to do new and interesting things-- new, amazing things are what we are about, even when they're imperfect.
- Basically-- what could we give the "Hell-No-10%" that would make them feel more comfortable about the intellectual freedom of Wikimedia? What new features or other new things would they like?
- My thinking is that if we can tie "shuttering" to something strongly pro-intellectual-freedom, it will send the right message-- we're expanding our freedom, so we need to let people do shuttering given all the new freedom we're gaining.
- For me, "new freedom" would look like us starting lots of new projects about all types of information sharing-- but I'm not in the "Hell No-10%", so it's not my opinion we need.
- But you get the idea-- if we could have a "massive explosion of authorial freedom", the shuttering would be a tiny fly of censorship washed out by a nuclear explosion of increased intellectual freedom. The best time to hand out earplugs is when you're about to crank up the volume-- just handing them out in the middle of an ongoing performance really upsets the musicians. --AlecMeta 01:42, 12 September 2011 (UTC)
- In a labeling system as it has been discussed so far, every image gets a single set of labels agreed upon by editors. Much like the category system, both the labels themselves and their application to the image need to be done by consensus.
- In a tagging system, each user applies his or her own choice of tags to an image. There is no single set of tags for an image, but instead, one set of tags per image per user. So I could use free-text terms to describe an image, you could do the same, and 999 other people could do the same. Out of these 1001 sets of descriptions, trends start to emerge (perhaps 800 of us used the term "clouds", 500 used the term "bicycle", and 200 used the term "balloon").--Trystan 16:33, 1 October 2011 (UTC)
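As a minimal sketch of that aggregation step, collapsing the 1001 per-user tag sets into overall counts is essentially a one-line reduction; the tag sets and counts below are made up to mirror the example above.
<syntaxhighlight lang="python">
# A minimal sketch of turning per-user tag sets into overall counts for one image.
# The data is invented to mirror the 1001-user example above.
from collections import Counter

# Each entry is one user's personal set of tags for the same image.
per_user_tags = (
    [{"clouds", "bicycle", "balloon"}] * 200
    + [{"clouds", "bicycle"}] * 300
    + [{"clouds"}] * 300
    + [{"sky"}] * 201
)

counts = Counter(tag for tags in per_user_tags for tag in tags)
print(counts.most_common())
# [('clouds', 800), ('bicycle', 500), ('sky', 201), ('balloon', 200)]
</syntaxhighlight>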
Queering Wikipedia 2023 conference
Wikimedia LGBT+ User Group and the organizing team of Queering Wikipedia are delivering the Queering Wikipedia 2023 Conference for LGBT+ Wikimedians and allies, as a hybrid, bilingual and trans-local event. It takes place online on 12, 14 and 17 May, the last of which is the International Day Against Homophobia, Biphobia and Transphobia (#IDAHOBIT), with offline events at around 10 locations on 5 continents over that 5-day span as QW2023 Nodes.
The online program is delivered as a series of keynotes, panels, presentations, workshops, lightning talks and creative interventions, starting at noon (UTC) on Friday with the first keynote, by Dr Nishant Shah, entitled I spy, with my little AI — Wikiway as a means to disrupt the ‘dirty queer’ impulses of emergent AI platforms. The second keynote, at Sunday’s close, is by Esra’a Al Shafei, vice chair of the Wikimedia Foundation’s Board of Trustees, entitled Digital Public Spaces for Queer Communities.
If you have been an active Wikimedian or an enthusiast supporting LGBT+ activities, or if you identify as part of the larger LGBT+ community and its allies in Wikimedia, please join us in advancing this thematic work. We encourage you to join online, or in person with fellow Wikimedians if it is easy and safe to do so. Our working languages are English and Spanish, with possible local language support at the sites of Nodes.
Registration for the online event is free and open until Wednesday, May 10th at 18:00 UTC, as part of the safety protocol. Late registration approval and event access denial are at the discretion of the organizers.
More information and registration details may be found on Meta at QW2023.
Thanks, from Wikimedia LGBT+ User Group via MediaWiki message delivery (talk) 22:56, 9 May 2023 (UTC)