Learning and Evaluation/Evaluation reports/2015/Writing Contests/Limitations
Program History
On-wiki writing contests are ways for Wikipedians to come together, build community, and improve the quality and quantity of Wikipedia articles. Contests run for a set period of time, from one week to almost one year, and take place entirely on a Wikimedia project. Contests are generally planned and managed by long-term Wikipedians, who develop the concept, subject focus (if any), rules, rating system, and prizes. In some cases, contests are supported or organized by Wikimedia organizations (such as chapters) or in collaboration with other organizational partners (e.g. GLAM institutions). The main activity of participants in writing contests is creating and improving articles during a specific time period. Points may be given for the size or quality of articles or improvements made. At the end of the contest period, program leaders or juries tally the points or review contributions, and may award prizes or recognize participants.

We have seen writing contests in the Wikimedia movement be adapted to achieve different goals. While many contests were organized directly by communities (and many still are, such as the Producer Prize on Arabic Wikipedia), Wikimedia organizations and cultural institutions have also played important roles in organizing contests. Here are a few examples of writing contests over the years:
Response rates and data quality/limitations
In this round of reporting, a total of 61 different contests were identified across several Wikimedia communities. Of these, 39 contests – reported by 17 organizations and individuals – had accessible and relevant data that could be used for this report, and all were from Wikipedia projects. The data in these reports are only a subset of relevant metrics for these programs: they primarily include inputs and outputs and do not capture broader outcomes. [1] Furthermore, the contests included in this report are Wikipedia writing contests rather than contests on other Wikimedia projects, because those were the contests whose data we were able to identify and obtain.
In addition to data reported directly through voluntary reporting, we were able to mine for and fill many data gaps using publicly available information on contest organizers' websites and Wikimedia project pages. The data we mined included numbers of participants, event start and end times, bytes added and bytes removed, articles created or improved, and ratings of article quality (Featured or Good). Using tools like Wikimetrics and Quarry, we were able to retrieve additional content or user metrics for 30 contests. [2]
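As an illustration of this kind of mining, the sketch below uses the public MediaWiki API to estimate bytes added and bytes removed for a single article during a contest window. It is only a minimal sketch, not the report's actual pipeline: the endpoint, article title, and date range are placeholders, and a fuller analysis would also account for the last revision made before the window opened.

```python
# Hedged sketch: estimating bytes added and removed for one article during a
# contest window via the public MediaWiki API. The wiki, title, and dates
# below are placeholders, not values from the report.
import requests

API = "https://en.wikipedia.org/w/api.php"  # assumption: English Wikipedia

def revision_sizes(title, start, end):
    """Return the byte size of each revision of `title` made between
    `start` and `end` (ISO 8601 timestamps), oldest first."""
    sizes, params = [], {
        "action": "query", "format": "json", "prop": "revisions",
        "titles": title, "rvprop": "timestamp|size",
        "rvdir": "newer", "rvstart": start, "rvend": end, "rvlimit": "max",
    }
    while True:
        data = requests.get(API, params=params).json()
        for page in data["query"]["pages"].values():
            sizes.extend(rev["size"] for rev in page.get("revisions", []))
        if "continue" not in data:
            return sizes
        params.update(data["continue"])  # follow API pagination

def bytes_added_removed(sizes):
    """Sum positive and negative size deltas between consecutive revisions.
    (The delta of the first in-window revision is not counted here; a fuller
    version would also fetch the last pre-window revision.)"""
    added = sum(max(b - a, 0) for a, b in zip(sizes, sizes[1:]))
    removed = sum(max(a - b, 0) for a, b in zip(sizes, sizes[1:]))
    return added, removed

sizes = revision_sizes("Example", "2015-01-01T00:00:00Z", "2015-02-01T00:00:00Z")
print(bytes_added_removed(sizes))
```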
We know that some data may have been undercounted due to missed tracking or challenges in mining data. [3] Only a minority of events reported key inputs such as budget, staff hours, and volunteer hours, and this information cannot be mined. Further, some metrics are not consistent across projects. A key example is that different language Wikipedias may have different processes for designating high-quality articles, which makes any analysis of quality articles across Wikipedia projects difficult. [4] Thus, while we explore these input, output, and outcome metrics, we cannot draw many strong conclusions about program scale or about how inputs influence outputs or outcomes.
Lastly, the data for writing contests are not normally distributed; the distributions are, for the most part, skewed, which limits the statistical analyses we can apply to the data. Instead, we present the median and range of each metric and use the term average to refer to the median, since the median is a more robust measure of central tendency than the arithmetic mean for skewed data. To give a more complete picture of the distributions, we also include the means and standard deviations for reference.
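As a small illustration with purely hypothetical numbers (not report data), the snippet below shows why the median is reported as the average: a single unusually large contest pulls the arithmetic mean well above the typical value, while the median stays representative.

```python
# Hypothetical per-contest participant counts, not data from this report.
import statistics

participants = [5, 7, 8, 9, 12, 15, 400]  # one large outlier contest

print(statistics.median(participants))  # 9   -> reported as the "average"
print(statistics.mean(participants))    # ~65 -> shown only for reference
print(statistics.pstdev(participants))  # large, driven by the outlier
```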
Priority goals
We asked program leaders to share their priority goals for each writing contest.[5] The most commonly selected goals were "Increasing Diversity of Information Coverage", "Building and Engaging Community", "Increasing Awareness of Wikimedia Projects", and "Recruiting and/or converting New Users". The second most commonly reported goals were "Increasing Contributions", "Increased Reader Satisfaction", "Increasing Volunteer Motivation and Commitment", "Retaining and Activating Existing Editors", and "Increasing Skills for Editing/Contributing".[6] Five program leaders reported priority goals for seven writing contests. The number of priority goals selected ranged from 4 to 18; the average number selected was 13.[7] The table below lists program leader goals from most often selected to least often selected.