User:AlecMeta/Next Steps on the Image Filter
Considering the discussion at Image filter referendum/Next steps/en.
We want to reconcile two things that are very hard to reconcile:
- Volunteer, editor, and donor communities that oppose censorship in any form.
- Readers who want to be able to decide for themselves whether to view images that could reasonably be expected to upset them.
This is a big challenge.
What did we learn from the poll
- Lots of people really do believe we should have such a feature-- not just one or two rogue prudes at the top of an organization.
- Cultural neutrality is indeed very important to us. About 70% of respondents rated it as important, and a full 41% rated it a 10.
What did we learn from the discussion
- Our most active editors are a distinctly non-representative sample of the larger readership. This shouldn't be surprising; it's true by definition.
- A lot of people feel very strongly that our organization shouldn't be involved in this, shouldn't promote it, and should be actively working to subvert censorship, not enable it. This is a real and valid point of view that may be extremely widespread among our editors.
- Should we encourage or enable self-censorship / filter bubbles?
- Could our filter be misused for the purposes of censorship?
- No consensus yet exists on what process could effectively categorize which images should be filtered.
- No consensus yet exists on how to fairly pick filter-categories, how many categories there should be, etc.
- Among the English-language discussion, I would have to say that the majority, if not a consensus, opposed such creation.
- Categories and Labels are fundamentally different objects. Simply using our existing categories directly would dramatically alter the dynamics of image categorization. Objective categories may be used to seed subjective Label-Categories, but curating Labels is very different from curating Categories.
Filter Options
[edit]No Such Feature - Status Quo
The current situation is merely suboptimal, not catastrophic. If a filter would significantly hurt our ability to recruit editors and donors, then status quo might remain the best option.
Simplest features: "hide all images on this page/site"
- Pro: Cheap, easy, non-controversial, culturally neutral, and a stable equilibrium.
- Con: It would 'over-filter' beyond what our readers want.
I don't think this is controversial, I don't think it'd be hard to do, and I don't think it would be a bad solution.
Simple features: Any flagged image shuttered
- All images start non-flagged. Any image that is flagged by anyone is added to the category 'objectionable', no justification required except good-faith sincerity (a rough sketch follows below).
- Pro: Easy, culturally neutral, stable equilibrium
- Con: Most users of the feature will find it over-filters beyond what they want, approaching shuttering all images.
This might not be worth the trouble for the advantage it provides over "shutter ALL images". But I just want to point out that there is a set of images that can be collaboratively created while remaining culturally neutral, objective, and simple. It's a defensible position that's closer to a working feature than the status quo, but not as controversial as a more complex system.
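For concreteness, here is a minimal sketch of how such a "one flag shutters" list could work. The names are hypothetical and do not come from MediaWiki; this is an illustration, not an implementation.

```python
# Hypothetical sketch of the "any flagged image shuttered" option.
# None of these names come from MediaWiki; they are purely illustrative.

flagged_images = set()  # filenames that any reader has flagged in good faith

def flag_image(filename: str) -> None:
    """Record a good-faith objection; no justification is required."""
    flagged_images.add(filename)

def should_shutter(filename: str, user_opted_in: bool) -> bool:
    """Hide the image behind a click-through, but only for opted-in readers."""
    return user_opted_in and filename in flagged_images

flag_image("Example_disputed.jpg")
print(should_shutter("Example_disputed.jpg", user_opted_in=True))   # True
print(should_shutter("Example_disputed.jpg", user_opted_in=False))  # False
```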
Moderately Complex feature: Shutter "objectionable" images, as defined by frequency of objection
- Images can be flagged as 'objectionable'-- images flagged above a certain rate get shuttered (hidden until clicked) for users who want that. Users could customize how strict they want that rate to be, all the way up to filtering any image that anyone has ever objected to (see the sketch after this list).
- Pro: Relatively simple, mostly-culturally neutral, stable equilibrium.
- Con: "Tyranny of the Majority"-- large populations in "voting blocs" will flag (in a bigoted fashion). They will thus influence other users' reading experiences, even if it's only through the prejudice of shuttering. Aiding collaborative filter development is very controversial for many editors.
Per-user blacklists/whitelists
- Each user can hide images on their own in a non-collaborative fashion (see the sketch after this list).
- Neutral, stable, defensible, and very close to the status quo.
- No user can influence another user's experience-- very important to some people.
- Con: Users will see images they don't like and will have to click to hide them.
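A minimal sketch of a purely personal filter, assuming the lists live with the individual reader and are consulted only when rendering pages for that reader:

```python
# Hypothetical per-user blacklist/whitelist sketch. Nothing here touches any
# shared state, so no reader's choices affect anyone else's experience.

from dataclasses import dataclass, field

@dataclass
class PersonalImageFilter:
    blacklist: set = field(default_factory=set)  # images this reader hides
    whitelist: set = field(default_factory=set)  # images this reader always shows

    def hide(self, filename: str) -> None:
        self.blacklist.add(filename)
        self.whitelist.discard(filename)

    def show(self, filename: str) -> None:
        self.whitelist.add(filename)
        self.blacklist.discard(filename)

    def should_shutter(self, filename: str) -> bool:
        return filename in self.blacklist

mine = PersonalImageFilter()
mine.hide("Example_disputed.jpg")
print(mine.should_shutter("Example_disputed.jpg"))  # True -- for this reader only
```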
Other
- Make an API / 3rd party collaboration: Work to enable all interested third parties to create filtered versions of our projects.
Go Big: Create a collaborative content tagging & labelling system-- solve 'The Quality Problem'
- We do well with projects involving verifiable facts.
- We haven't yet mastered collaborating on subjective standards of quality.
- "The Quality Problem" arises when quality stops being objective and starts being subjective.
- So our featured articles can have impeccable spelling, since everyone can agree spelling corrections improve the quality of the article.
- In contrast, our articles are less likely to have "entertaining authorial voice" or "be a real page-turner"-- because those are very subjective qualities-- we can't agree on which changes are improvements.
- To improve quality, we must embrace subjective tagging of content.
- Our current content quality tagging is very primitive. Yes, I can find "featured content" likely to be chock full of citations.
- But what if I want to find content that is "Insightful" or "Upbeat" or "Highly Informative"?
- When can I do a search for content that is "humorous, insightful, & upbeat"? (A rough sketch of such a search appears at the end of this section.)
- The "Offensive Images" tagging is just one instance of a whole array of issues we'll face as we evolve.
- Right now, others want to identify content that other people with their values have found "Offensive".
- Right now, I want to identify content that other people with my values have found "Mind-blowing".
- These are the same technologies.
- Spending donations on censorship seems to be especially controversial.
- It's not as controversial to develop something we already need-- highly collaborative, culture-neutral, highly subjective content labels.
Controversial images are 'not a big deal'. But "Collaborative Subjective Measures of Quality" is a big deal.
If we really have to enter this space race, then let's at least shoot for the moon. Don't just focus on measuring "offensiveness"-- dream bigger, and imagine subjective tags of all variety.
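To make the "same technologies" point concrete, here is a toy sketch of a label search. The articles and labels are invented, and a real system would obviously need scale, spam resistance, and per-community weighting; the point is only that "offensive" and "mind-blowing" ride on identical machinery.

```python
# Toy sketch of searching by subjective labels. Articles and labels are
# invented; "offensive" is just one label among many, handled identically.

article_labels = {
    "Article_A": {"insightful", "upbeat", "humorous"},
    "Article_B": {"insightful", "highly informative"},
    "Article_C": {"offensive", "mind-blowing"},
}

def search_by_labels(required: set) -> list:
    """Return every article carrying all of the requested labels."""
    return [title for title, labels in article_labels.items() if required <= labels]

print(search_by_labels({"humorous", "insightful", "upbeat"}))  # ['Article_A']
print(search_by_labels({"offensive"}))                         # ['Article_C']
```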
Beating my latest dead horse-- "Wikimedia is not the Reference Section"
- Bioinformatics projects need our help to document genes and biomolecules that will drastically alleviate suffering. That's a good cause, and we know how to help them.
- OpenScience is an exciting new movement, taking science out of the journals and onto the free web. That's a good cause, and we are in a unique position to help connect professional scientists with amateurs with special skills.
- Fancruft, incredibly detailed facts about fictional worlds, allows people from very different backgrounds to come together and share something they love. It's not essential content, but it's harmless, its editors are valued, and it deserves its own home at Wikimedia.
- Genealogists like to put together the jigsaw puzzle of their ancestry. I admit, to me it's as exciting as "retirement home bingo", but amateur genealogists need us, we can help them, and I bet they'll "pay their own way" in terms of donations and edits.
- Bioinformatics content, OpenScience projects, Fancruft content and communities, even Genealogical data all have one thing in common:
- All that content is far more 'in mission scope' and far less 'controversial' than "Image Filtration".
In short, if we build something to actively impede information-sharing, I think we should recognize that it means we have reached a state where Wikimedia has some latitude to expand, that we are destined to be "more than 2011's Wikipedia, much much more". It will become very hard to turn away good-faith information-sharing projects if we are actively creating censor-related features and welcoming their developers with open arms.
Process
- Identify the project-communities/language-communities that want this feature the most, and see if those communities' active editors reflect the trend towards wanting the feature.
- Are there any projects that could demonstrate consensus to implement one of the proposed options on a trial basis? If so, let's use them as a pilot and see if it works.
- As a general rule, baby-stepping features, project by project, is a definite way to ease controversy. The projects that need it most will adopt it, the projects that need it least will take a "Wait and See" attitude, and if it doesn't blow up, they'll be more inclined to adopt it.