
User:Pundit/Image advisory label proposal



Most policy proposals start as theoretical concepts. This one does not: it is based on personal experience. I am going to describe it, and I hope that by relating to it you will be able to better understand why this proposal may make sense.


Personal rationale


I am a scholar and have delivered over 40 papers at different conferences, so when I was preparing another conference presentation, I was not particularly anxious about it – it was a routine thing. The topic of the presentation was sociological, and two pictures were central to it. As you can imagine, the first thing I did was check whether the images were available on the Commons. They were! Moreover, both had been discussed and approved by a broad consensus as appropriate for an encyclopedia because of their educational and artistic value. I placed them on one slide, in two corners, filled the rest with text, and went on. Yet immediately after delivering the presentation I found out that some people in the audience were really upset by the images. I sincerely apologized to them, spent some time discussing their perception and receiving their feedback, and, hopefully, understood it. They were absolutely right: I should not have used the images at all, but simply described them in words, since they were overly explicit, even though both were drawings.

And then I started to think about why I wasn't reflexive enough in the first place. I am reasonably experienced with presentations, and perhaps a bit above average in intelligence. While my experience with controversial images is close to zero (or was, since I have since browsed the Commons to learn more about them), I do have experience in editing and creating articles on LGBTQ and gender studies topics, which are sometimes considered controversial.

Granted, I was used to academic conference standards, and the audience was not entirely academic. I was also so engrossed in the field material, which related to the community I was addressing, that I assumed everybody in the audience was familiar with it anyway. Clearly, I lacked general sensitivity as well. I may also have been just plain careless, and my actions were definitely very stupid. But ultimately, apart from all these reasons, I think there was something about the trust I had in the Commons: I unconsciously assumed that whatever is deemed appropriate for an encyclopedia should not be considered controversial in a scholarly presentation, especially when the images are immediately relevant to the topic.

I have been involved in Wikimedia projects for six years. Yet I saw no warning light when using the images, since I (again, non-reflexively) trusted that materials recognized as educational or artistic by our community, and confirmed as such through consensus, are safe to use.

In the many discussions about filtering we have had over the years, the argument of context has been raised quite often: as long as images are placed in relevant articles, they are seen only by people actively searching for them and expecting what they find.

This argument is flawed for two reasons. First, the resources of the Commons are put to many more uses than Wikimedia projects alone. Whoever reuses them makes the ultimate decisions, but may naively trust the community's judgment and extrapolate it beyond the assumed context.

Second, even for articles on Wikipedia this argument does not always work. For instance, some images that are disturbing to some people may be encountered as art. More typically, readers stumble upon them while seeking information, not necessarily explicit depictions. To give another personal example: as a non-native English speaker, I myself learned the meaning of the word "fisting" from Wikipedia, not really expecting any sexual content at all. Was I shocked by the images? Not particularly, but I definitely was not aware that I was going to see them.

Thus, based on my personal experience and my personal failure in recognizing the sensitivities of people who are different from me, I have come up with a working proposal, which I hope you will help improve, unless it is flawed beyond repair.


The proposal


The proposal is quite simple, and the idea has been around for a while, but I believe the moment for implementing it is better now than ever. We have had a long discussion on image filtering at a global level. While a majority supports the idea, a minority of us have strong feelings about it, and a consensus is difficult to reach globally.

Thus, I propose something softer, which better addresses the diversity of the projects. I suggest introducing an "advisory" label/category for visual materials used on our projects (I am initiating this discussion on Meta, since it is relevant to all of us, but ultimately the Commons community clearly has a say in it as well). It would be applied to images which, in the best judgment of editors, could be disturbing to some groups of people. This category could be broad (I could imagine including, e.g., sexual materials, photos of accidents and dead bodies, or even occasionally religious content under this label), and it could certainly use sub-categories. If the existing category system proved too universal, we could rely on tagging images for each project individually (following the interwiki logic), even though the images could then only be identified on the Commons as "requiring advisory in some cultures". A rough sketch of what such tagging might look like follows below.
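To make this concrete, here is a minimal sketch of how the tagging could appear in the wikitext of a Commons file description page. The category and template names ("Advisory", "Advisory/Sexual content", the {{Advisory}} template and its parameters) are purely hypothetical placeholders for illustration, not existing Commons conventions:

 <!-- Hypothetical advisory tagging on a file description page -->
 [[Category:Advisory]]
 [[Category:Advisory/Sexual content]]  <!-- an example sub-category -->
 
 <!-- Alternatively, a hypothetical template could carry per-project tags,
      following the interwiki logic mentioned above: -->
 {{Advisory|type=explicit|projects=de,ar}}

Either form would leave each project free to act on the tags, or to ignore them entirely.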

One could say that defining what requires an advisory is arbitrary, extremely broad, and culture-dependent, and I definitely agree. However, I would trust our community's wisdom in determining the practicalities, as we have successfully done in the past (e.g., by being able to distinguish porn from art). This is just adding a "use caution" sign, and its universality may actually be an asset (it does not have to apply only to explicit images; it would simply be a common-sense warning, established through consensus where in doubt).

A great advantage of such a solution is that it would take into account the fact that different projects and cultures have different needs. Some projects would decide not to use this category for any warnings, and that is fine. Local projects know best how their readers react and to what extent the cultures they operate in are sensitive to explicit images (a factoid: computer games in the US avoid nudity, while in Germany they avoid showing red blood). Other projects, however, could decide to use this categorization for optional filtering in a form of their own choosing (again, many different approaches are conceivable, from a short textual warning to an image filter, depending on the actual needs decided by each project separately). If we used clever sub-categories, projects could treat different kinds of controversial content differently. Additionally, since images from the Commons are used much more widely than just on Wikimedia projects, the advisory label would serve people less familiar with what can be found there and provide a useful warning to those outside our community.

Ultimately, this would also allow logged-in Wikipedians to set up their own filters based on their own preferences, which would be an additional advantage of registering an account.

With such an approach, the typical 'free speech' counterargument also weakens a little. This is more about giving local projects, and ultimately local readers, a choice, and about caring about sensitivities other than our own. Otherwise, there can always be another well-meaning, non-reflexive idiot who simply will not consider that caution should be advised.

Any comments on this essay/draft of the proposal are very welcome. Pundit (talk) 22:48, 15 July 2012 (UTC)