Response rates and data quality/limitations
We had a low response rate for online writing contests: only one program leader reported directly through our survey, covering two writing contests. To fill in data gaps, we mined data on six additional writing contests from information available publicly on wiki (read more here about data mining). These mined data provided information on program dates, budgets, number of participants, content creation and improvement, and content quality. When possible, we worked directly with program leaders to confirm the mined data.
In total, this report features data from three Wikipedia language versions. We were able to pull limited information about user retention, but more data are needed before we can draw conclusions. We were not able to pull data on how much text was added to the Wikipedia article namespace, because contests run over extended periods of time and participants often make edits outside the contest subject areas. As with all program data reviewed in this report, the data were often partial and incomplete; please refer to the notes, if any, in the bottom left corner of each graph below.
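For readers curious how this kind of on-wiki data can be mined, the sketch below shows one possible approach using the public MediaWiki API. It is an illustration only, not the script used for this report; the wiki endpoint, usernames, and dates are placeholder assumptions.

```python
import requests

# Illustrative only: the endpoint, usernames, and dates are placeholders,
# not the actual contests or tooling behind this report.
API_URL = "https://en.wikipedia.org/w/api.php"

def contest_edit_counts(usernames, contest_start, contest_end):
    """Count article-namespace (ns 0) edits per participant during a contest window."""
    session = requests.Session()
    counts = {}
    for user in usernames:
        params = {
            "action": "query",
            "list": "usercontribs",
            "ucuser": user,
            # usercontribs returns newest edits first, so ucstart is the later timestamp
            "ucstart": contest_end,
            "ucend": contest_start,
            "ucnamespace": 0,   # article namespace only
            "uclimit": "max",   # up to 500 edits per request; heavier editors need continuation
            "format": "json",
        }
        data = session.get(API_URL, params=params).json()
        counts[user] = len(data["query"]["usercontribs"])
    return counts

# Hypothetical usage:
# contest_edit_counts(["ExampleParticipant"], "2014-03-01T00:00:00Z", "2014-03-31T23:59:59Z")
```

A similar query can return byte-size changes per edit, but as noted above, attributing added text to the contest itself is difficult when participants also edit outside the contest's subject areas.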
Priority goals
Onwiki writing contests have five presumed priority goals
We were unable to determine reported priority goals for writing contests, since we received only one direct report. While that report did include priority goals, one report is not enough to speak for all program leaders, so we have not included its goals in this report.
However, we are able to identify a selection of presumed priority goals, based on past logic model workshop sessions with the community and on the response from the one direct report we received. Those goals are: